Advanced Software
1. Waterfall Model
The waterfall model is also called the 'linear sequential model' or 'classic life cycle
model'.
In this model, each phase is fully completed before the beginning of the next phase.
This model is used for small projects.
In this model, feedback is taken after each phase to ensure that the project is on the right
path.
The testing phase starts only after development is complete.
Advantages of the waterfall model
The waterfall model is simple and easy to understand, implement, and use.
All the requirements are known at the beginning of the project, hence it is easy to
manage.
It avoids overlapping of phases because each phase is completed in one pass.
This model works for small projects because the requirements are understood very well.
This model is preferred for projects where quality is more important than the cost of the
project.
Disadvantages of the waterfall model
This model is not good for complex and object-oriented projects.
It is a poor model for long projects.
Problems with this model are not uncovered until software testing begins.
The amount of risk is high.
2. Incremental Process model
The incremental model combines the elements of the waterfall model, applied in an
iterative fashion.
The first increment in this model is generally a core product.
Each increment builds the product and submits it to the customer for any suggested
modifications.
The next increment implements the customer's suggestions and adds additional
requirements to the previous increment.
This process is repeated until the product is finished.
For example, word-processing software is often developed using the incremental model.
Advantages of the incremental model
This model is flexible because the cost of development is low and initial product delivery
is faster.
It is easier to test and debug during a smaller iteration.
Working software is produced quickly and early in the software life cycle.
The customers can respond to its functionalities after every increment.
Disadvantages of the incremental model
The cost of the final product may exceed the cost estimated initially.
This model requires very clear and complete planning.
Design planning is required before the whole system is broken down into small
increments.
Customer demands for additional functionality after every increment can cause
problems for the system architecture.
3. RAD model
RAD (Rapid Application Development) is an incremental process model with a short development
cycle. Its phases are described below.
1. Business modeling
Business modeling consists of the flow of information between various functions in the
project.
For example, what type of information is produced by each function, and which functions
handle that information.
A complete business analysis should be performed to get the essential business
information.
2. Data modeling
The information gathered in the business modeling phase is refined into a set of data objects
that are essential to the business.
The attributes of each object are identified and the relationships between the objects are defined.
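As a rough illustration of this phase, the sketch below (written in Python, with hypothetical Customer
and Order entities that are not taken from the text) shows how data objects, their attributes, and a
relationship between them might be captured.

# Data modeling sketch: objects refined from the business information, with
# attributes and a one-to-many relationship. The entity names are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Order:
    order_id: int
    amount: float


@dataclass
class Customer:
    customer_id: int
    name: str
    orders: List[Order] = field(default_factory=list)   # one customer has many orders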
3. Process modeling
The data objects defined in the data modeling phase are transformed to achieve the information
flow needed to implement the business model.
The process description is created for adding, modifying, deleting or retrieving a data
object.
4. Application generation
Automated tools are used to convert the process and data models into code and working
prototypes.
5. Testing and turnover
The prototypes are independently tested after each iteration so that the overall testing time
is reduced.
The data flow and the interfaces between all the components are fully tested. Hence,
most of the programming components have already been tested.
Agility and Process
The meaning of Agile is swift or versatile."Agile process model" refers to a software development
approach based on iterative development. Agile methods break tasks into smaller iterations, or parts,
and do not directly involve long-term planning. The project scope and requirements are laid down at the
beginning of the development process. Plans regarding the number of iterations, the duration and the
scope of each iteration are clearly defined in advance.
Each iteration is considered as a short time "frame" in the Agile process model, which typically lasts
from one to four weeks. The division of the entire project into smaller parts helps to minimize the
project risk and to reduce the overall project delivery time requirements. Each iteration involves a
team working through a full software development life cycle including planning, requirements
analysis, design, coding, and testing before a working product is demonstrated to the client.
Some widely used agile process models are:
Scrum
Crystal
Dynamic Software Development Method(DSDM)
Feature Driven Development(FDD)
Lean Software Development
eXtreme Programming(XP)
Scrum
SCRUM is an agile development process focused primarily on ways to manage tasks in team-based
development conditions.
eXtreme Programming(XP)
This type of methodology is used when customers' demands or requirements are constantly changing,
or when they are not sure about the system's performance.
Crystal:
There are three concepts in this method:
1. Chartering: Multiple activities are involved in this phase, such as forming a development
team, performing feasibility analysis, developing plans, etc.
2. Cyclic delivery: This consists of two or more delivery cycles, during which:
The team updates the release plan.
The integrated product is delivered to the users.
3. Wrap up: According to the user environment, this phase performs deployment and post-
deployment activities.
Dynamic Software Development Method(DSDM):
DSDM is a rapid application development strategy for software development and provides an agile
project delivery framework. The essential features of DSDM are that users must be actively
involved, and teams are given the authority to make decisions. The techniques used in DSDM include
timeboxing, MoSCoW prioritization, and prototyping.
The DSDM project contains seven stages:
1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Feature Driven Development(FDD):
This method focuses on "Designing and Building" features. In contrast to other agile methods, FDD
describes the small steps of work that should be achieved separately per feature.
Lean Software Development:
Lean software development methodology follows the principle of "just-in-time production." The lean
method aims to increase the speed of software development and reduce costs. Lean development
can be summarized in seven principles.
1. Eliminating Waste
2. Amplifying learning
3. Defer commitment (deciding as late as possible)
4. Early delivery
5. Empowering the team
6. Building Integrity
7. Optimize the whole
Advantage(Pros) of Agile Method:
1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design and fulfils the business requirement.
4. Anytime changes are acceptable.
5. It reduces total development time.
Disadvantages(Cons) of Agile Model:
1. Due to the shortage of formal documentation, confusion can arise, and crucial decisions taken
throughout various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is complete and the developers are
allotted to another project, maintenance of the finished product can become difficult.
Scrum
Scrum is a lightweight yet incredibly powerful set of values, principles, and practices. Scrum relies on
cross-functional teams to deliver products and services in short cycles, enabling:
Fast feedback
Quicker innovation
Continuous improvement
Rapid adaptation to change
More delighted customers
Accelerated pace from idea to delivery
Scrum is "a lightweight framework that helps people, teams and organizations generate value through
adaptive solutions for complex problems.1" Scrum is the most widely used and popular agile
framework. The term agile describes a specific set of foundational principles and values for
organizing and managing complex work.
Though it has its roots in software development, today scrum refers to a lightweight framework that is
used in every industry to deliver complex, innovative products and services that truly delight
customers. It is simple to understand, but difficult to master.
Scrum's Approach to Work
People are the focus of scrum. Scrum organizes projects using cross-functional teams, each one of
which has all of the capabilities necessary to deliver a piece of functionality from idea to delivery.
The scrum framework guides the creation of a product, focusing on value and high visibility of
progress. Working from a dynamic list of the most valuable things to do, a team brings that product
from an idea to life using the scrum framework as a guide for transparency, inspection, and
adaptation. The goal of scrum is to help teams work together to delight your customers.
The Scrum Team
Developers - On a scrum team, a developer is anyone on the team that is delivering work,
including those team members outside of software development. In fact, the 15th State of
Agile Report found that the number of non-software teams adopting agile frameworks like
scrum doubled from 2020 to 2021, with 27% reporting agile use in marketing, and between
10-16% reporting use in security, sales, finance, human resources, and more.
Product Owner - Holds the vision for the product and prioritizes the product backlog
Scrum Master - Helps the team best use scrum to build the product.
Scrum Artifacts
Product Backlog - An emergent, ordered list of what is needed to improve the product; it
includes the product goal.
Sprint Backlog - The set of product backlog items selected for the sprint by the developers
(team members), plus a plan for delivering the increment and realizing the sprint goal.
Increment - A sum of usable sprint backlog items completed by the developers in the sprint
that meets the definition of done, plus the value of all the increments that came before. Each
increment is a recognizable, visibly improved, operating version of the product.
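As a rough illustration only, the three artifacts can be pictured as simple data structures. The Python
sketch below uses class and field names that are assumptions made for this example; they are not
defined by the Scrum Guide.

# Illustrative data structures for the three scrum artifacts (names are assumed).
from dataclasses import dataclass, field
from typing import List


@dataclass
class BacklogItem:
    description: str
    done: bool = False                                        # per the definition of done


@dataclass
class ProductBacklog:
    product_goal: str
    items: List[BacklogItem] = field(default_factory=list)   # ordered by value


@dataclass
class SprintBacklog:
    sprint_goal: str
    selected: List[BacklogItem] = field(default_factory=list)


def increment(sprint: SprintBacklog) -> List[BacklogItem]:
    # The increment is the set of usable items that meet the definition of done.
    return [item for item in sprint.selected if item.done]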
Scrum Commitments
Each artifact has an associated commitment - not to be confused with one of the scrum values
(covered below) - that ensures quality and keeps the team focused on delivering value to its users.
Scrum Events
Scrum teams work in sprints, each of which includes several events (or activities). Don't think of these
events as meetings or ceremonies; the events that are contained within each sprint are valuable
opportunities to inspect and adapt the product or the process (and sometimes both).
The Sprint - The heartbeat of scrum. Each sprint should bring the product closer to the
product goal and is a month or less in length.
Sprint Planning - The entire scrum team establishes the sprint goal, what can be done, and
how the chosen work will be completed. Planning should be timeboxed to a maximum of 8
hours for a month-long sprint, with a shorter timebox for shorter sprints.
Daily Scrum - The developers (team members delivering the work) inspect the progress
toward the sprint goal and adapt the sprint backlog as necessary, adjusting the upcoming
planned work. A daily scrum should be timeboxed to 15 minutes each day.
Sprint Review - The entire scrum team inspects the sprint's outcome with stakeholders and
determines future adaptations. Stakeholders are invited to provide feedback on the increment.
Sprint Retrospective - The scrum team inspects how the last sprint went regarding
individuals, interactions, processes, tools, and definition of done. The team identifies
improvements to make the next sprint more effective and enjoyable. This is the conclusion of
the sprint.
Scrum vs Agile:
The difference between agile and scrum is that agile refers to a set of principles and values shared by
several methodologies, processes, and practices, while scrum is one of several agile frameworks and is
the most popular. Both differ from traditional project management approaches.
Fundamentals of Agile and Scrum
Agile principles and values foster the mindset and skills businesses need in order to succeed in an
uncertain and turbulent environment. The term agile was first used in the Manifesto for Agile
Software Development (Agile Manifesto) back in 2001. The main tenets of the Agile Manifesto are:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Scrum fulfills the vision of the Agile Manifesto by helping individuals and businesses organize their
work to maximize collaboration, minimize red tape, deliver frequently, and create multiple
opportunities to inspect and adapt.
Why an Agile Framework Like Scrum Works
As mentioned in more detail above, scrum is an agile framework that helps companies meet complex,
changing needs while creating high-quality products and services. Scrum works by delivering large
projects in small, bite-sized increments that a cross-functional team can begin and complete in
one short, timeboxed iteration.
As each product increment is completed, teams review the functionality and then decide what to
create next based on what they learned and the feedback they received during the review.
Transparency
To make decisions, people need visibility into the process and the current state of the product. To
ensure everyone understands what they are seeing, participants in an empirical process must share one
language.
Inspection
To prevent deviation from the desired process or end product, people need to inspect what is being
created, and how, at regular intervals. Inspection should occur at the point of work but should not get
in the way of that work.
Adaptation
Adaptation means that when deviations occur, the process or product should be adjusted as soon as
possible. Scrum teams can adapt the product at the end of every sprint, because scrum allows for
adjustments at the end of every iteration.
Iterative
Iterative processes are a way to arrive at a decision or a desired result by repeating rounds of analysis
or a cycle of operations. The objective is to bring the desired decision or result closer to discovery
with each repetition (iteration). Scrum’s use of a repeating cycle of iterations is iterative.
Incremental
Incremental refers to a series of small improvements to an existing product or product line that usually
helps maintain or improve its competitive position over time. Incremental innovation is regularly used
within the high technology business by companies that need to continue to improve their products to
include new features increasingly desired by consumers. The way scrum teams deliver pieces of
functionality into small batches is incremental.
The Five Scrum Values
A team’s success with scrum depends on five values: commitment, courage, focus, openness, and
respect.
Commitment Allows Scrum Teams to Be Agile
The scrum value of commitment is essential for building an agile culture. Scrum teams work together
as a unit. This means that scrum and agile teams trust each other to follow through on what they say
they are going to do. When team members aren’t sure how work is going, they ask. Agile teams only
agree to take on tasks they believe they can complete, so they are careful not to overcommit.
Courage Allows Scrum Teams to Be Agile
The Scrum value of courage is critical to an agile team’s success. Scrum teams must feel safe enough
to say no, to ask for help, and to try new things. Agile teams must be brave enough to question the
status quo when it hampers their ability to succeed.
Focus Allows Scrum Teams to Be Agile
The scrum value of focus is one of the best skills scrum teams can develop. Focus means that
whatever scrum teams start, they finish, so agile teams are relentless about limiting the amount of
work in progress (limiting WIP).
Openness Allows Scrum Teams to Be Agile
Scrum teams consistently seek out new ideas and opportunities to learn. Agile teams are also honest
when they need help.
Respect Allows Scrum Teams to Be Agile
Scrum team members demonstrate respect to one another, to the product owner, to stakeholders, and
to the Scrum Master. Agile teams know that their strength lies in how well they collaborate and that
everyone has a distinct contribution to make toward completing the work of the sprint. They respect
each other’s ideas, give each other permission to have a bad day once in a while, and recognize each
other’s accomplishments.
XP
Extreme programming (XP) is a software development methodology intended to improve software
quality and responsiveness to changing customer requirements. As a type of agile software
development, it advocates frequent releases in short development cycles, intended to improve
productivity and introduce checkpoints at which new customer requirements can be adopted.
XP is a lightweight, efficient, low-risk, flexible, predictable, scientific, and fun way to develop
software.
eXtreme Programming (XP) was conceived and developed to address the specific needs of software
development by small teams in the face of vague and changing requirements.
Extreme Programming is one of the Agile software development methodologies. It provides values
and principles to guide team behavior. The team is expected to self-organize. Extreme
Programming is built on the following values and principles −
Communication
Simplicity
Feedback
Courage
Respect
Embrace Change
A key assumption of Extreme Programming is that the cost of changing a program can be held mostly
constant over time.
Key XP practices include:
Writing unit tests before programming and keeping all of the tests running at all times. The
unit tests are automated and eliminate defects early, thus reducing costs (a test-first sketch
in Python follows this list).
Starting with a simple design just enough to code the features at hand and redesigning when
required.
Programming in pairs (called pair programming), with two programmers at one screen, taking
turns to use the keyboard. While one of them is at the keyboard, the other constantly reviews
and provides inputs.
Integrating and testing the whole system several times a day.
Putting a minimal working system into production quickly and upgrading it whenever
required.
Keeping the customer involved all the time and obtaining constant feedback.
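The sketch below, referred to in the first practice above, shows the test-first idea in Python: a unit test
is written before the production code it exercises, and the whole suite is kept runnable at all times. The
cart_total function and its expected behavior are hypothetical examples, not taken from the text.

import unittest


class CartTotalTest(unittest.TestCase):
    # Written first: the tests pin down the behavior before any production code exists.

    def test_total_without_discount(self):
        self.assertEqual(cart_total([10.0, 5.5]), 15.5)

    def test_total_with_discount(self):
        self.assertEqual(cart_total([100.0], discount=0.1), 90.0)


def cart_total(prices, discount=0.0):
    # Production code added only after the tests above existed (and initially failed).
    subtotal = sum(prices)
    return round(subtotal * (1.0 - discount), 2)


if __name__ == "__main__":
    unittest.main()   # keep all of the tests running at all times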
Extreme Programming takes the effective principles and practices to extreme levels.
Code reviews are effective as the code is reviewed all the time.
Testing is effective as there is continuous regression and testing.
Design is effective as everybody needs to do refactoring daily.
Integration testing is effective as teams integrate and test several times a day.
Short iterations are effective, as the planning game is used for release planning and iteration planning.
Success in Industry
Rapid development.
Immediate responsiveness to the customer’s changing requirements.
Focus on low defect rates.
System returning constant and consistent value to the customer.
High customer satisfaction.
Reduced costs.
Team cohesion and employee satisfaction.
Extreme Programming addresses many of the problems that are often faced in software development
projects.
Kanban
Kanban often requires company-wide buy-in to be effective. Each department must be relied upon
to perform their necessary tasks at a specific time in order to transition the process to future
departments. Without this wide buy-in, kanban methodologies will be futile.
Visualize Workflows
At the heart of kanban, the process must be visually depicted. Whether by physical, tangible cards or
leveraging technology and software, the process must be shown step by step using visual cues that
make each task clearly identifiable. The idea is to clearly show what each step is, what expectations
are, and who will take what tasks.
Old-fashioned (but still used today) methods included drafting kanban tasks on sticky notes. Each
sticky note could be colored differently to signify different types of work items. These tasks would
then be placed into swim lanes, defined sections that group related tasks to create a more organized
project. Today, inventory management software typically drives the kanban process.
Limit WIP
As kanban is rooted in efficiency, the goal of kanban is to minimize the amount of work in progress.
Teams are encouraged to complete prior tasks before moving on to a new one. This ensures that
future dependencies can be started earlier and that resources such as staff are not inefficiently
waiting to start their task while relying on others.
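As a rough sketch of this idea, the Python snippet below models a single kanban column that refuses to
accept new cards once its WIP limit is reached; the column name, limit, and card titles are illustrative
assumptions.

class KanbanColumn:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = []

    def pull(self, card):
        # Pull a card into this column only if the WIP limit allows it.
        if len(self.cards) >= self.wip_limit:
            return False              # the team must finish existing work first
        self.cards.append(card)
        return True

    def finish(self, card):
        self.cards.remove(card)       # frees capacity for the next card


doing = KanbanColumn("Doing", wip_limit=2)
assert doing.pull("Design invoice screen")
assert doing.pull("Fix login bug")
assert not doing.pull("Write release notes")   # blocked: WIP limit reached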
Manage Workflows
As a process is undertaken, a company will be able to identify strengths and weaknesses along the
workflow. Sometimes expectations are not met or goals are not achieved; in this case, it is up to the team
to manage the workflow and better understand the deficiencies that must be overcome.
As part of visually depicting workflows, processes are often clearly defined. Departments can often
easily understand the expectations placed on their teams, and kanban cards assigned to specific
individuals clearly identify responsibilities for each task. By very clearly defining policies, each
worker will understand what is expected of them, what checklist criteria must be met before
completion, and what occurs during the transition between steps.
Kanban Board
The kanban process utilizes kanban boards, organizational systems that clearly outline the elements
of a process. A kanban board often has three elements: boards, lists, and cards.
Kanban boards are the biggest picture of a process that organizes broad aspects of a workflow. For
example, a company may choose to have a different kanban board for different departments within
its organization (i.e. finance, marketing, etc.). The kanban board is used to gather relevant processes
within a single workspace or taskboard area.
To enable real-time demand signaling across the supply chain, electronic kanban systems have
become widespread. These e-kanban systems can be integrated into enterprise resource
planning (ERP) systems. These systems leverage digital kanban boards, lists, and cards that
communicate the status of processes across departments.
Kanban vs. Scrum
Scrum and kanban are both methodologies that help companies operate more efficiently. However,
each has a very different approach to achieving that efficiency. Scrum approaches affix certain
timeframes for changes to be made; during these periods, specific changes are made. With kanban,
changes are made continuously.
The scrum methodology breaks tasks into sprints, defined periods with fixed start and end dates in
which the tasks are well defined and to be executed in a certain manner. No changes or deviations
from these timings or tasks should occur. Scrum is often measured by velocity or planned capacity,
and a product owner or scrum master oversees the process.
On the other hand, kanban is more adaptive in that it analyzes what has been done in the past and
makes continuous changes. Teams set their own cadence or cycles, and these cycles often change as
needed. Kanban measures success by measuring cycle time, throughput, and work in progress.
Benefits of Kanban
The idea of kanban carries various benefits, ranging from internal efficiencies to positive
impacts on customers.
The purpose of kanban is to visualize the flow of tasks and processes. For this reason,
kanban brings greater visibility and transparency to the flow of tasks and objectives. By
depicting steps and the order in which they must occur, project participants may get a better
sense of the flow of tasks and importance of interrelated steps.
Because kanban strives to be more efficient, companies using kanban often experience faster
turnaround times. This includes faster manufacturing processes, quicker packaging and
handling, and more efficient delivery times to customers. This reduces company carrying
costs (i.e. storage, insurance, risk of obsolescence) while also turning over capital quicker for
more efficient usage.
Companies that use kanban practices may also have greater predictability for what's to come.
By outlining future steps and tasks, companies may be able to get a better sense of risks,
roadblocks, or difficulties that would have otherwise slowed the process. Instead, companies
can preemptively plan to attack these deficiencies and allocate resources to combat hurdles
before they slow processes.
Last, the ultimate goal of kanban is to provide better service to customers. With more
efficient and less wasteful processes, customers may be charged lower prices. With faster
processes, customers may get their goods faster. By being on top of processes, customers
may be able to interact with customer service quicker and have resolutions met faster.
Disadvantages of Kanban
For some companies, kanban is not possible or practical to implement.
First, kanban relies on stability; a company must have a predictable process that cannot
materially deviate. For companies operating in dynamic environments where activities are
not stable, the company may find it difficult to operate using kanban.
Kanban is often related to other production methodologies (just-in-time, scrum, etc.). For
this reason, a company may not reap all benefits if it only accepts kanban practices. For
example, a company may understand when it will need raw materials when reviewing
kanban cards; however, if the company does not utilize just-in-time inventory, it may be
incurring unnecessary expenses to carry the raw materials during periods when they are sitting
idle.
Kanban also needs to be kept consistently updated, for a few reasons. First, if
completed tasks are not marked off, the team analyzing next steps may not adequately assess
where along the process the team is. Second, there are no timing assessments for the different
phases, so team members must be aware of how much time is allocated to their task and
what future deadlines rely on the task at hand.
DevOps
DevOps is a combination of two words, one is Development and the other is Operations. It is a
culture that promotes the development and operations processes collectively.
Commonly used DevOps tools include Git, Ansible, Docker, Puppet, Jenkins, Chef, Nagios, and
Kubernetes.
DevOps allows a single team to handle the entire application lifecycle, from development to testing,
deployment, and operations. DevOps helps you to reduce the disconnection between software
developers, quality assurance (QA) engineers, and system administrators.
DevOps promotes collaboration between Development and Operations team to deploy code to
production faster in an automated & repeatable way.
DevOps has become one of the most valuable business disciplines for enterprises or organizations.
With the help of DevOps, the quality and speed of application delivery have improved to a great
extent.
DevOps is nothing but a practice or methodology of making "Developers" and "Operations" folks
work together. DevOps represents a change in the IT culture with a complete focus on rapid IT
service delivery through the adoption of agile practices in the context of a system-oriented approach.
Why DevOps?
Before going further, we need to understand why we need DevOps over the other methods.
1) Automation
Automation can reduce time consumption, especially during the testing and deployment phases.
Productivity increases, and releases are made quicker, through automation. This helps in catching bugs
quickly so that they can be fixed easily. For continuous delivery, each code change is verified through
automated tests, cloud-based services, and builds. This promotes production releases using automated
deploys.
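The sketch below illustrates the automation idea in Python, under the assumption that the project's
tests can be run with pytest; the deploy step is a placeholder for a real release mechanism, not a
specific DevOps tool.

import subprocess
import sys


def run_tests():
    # Assumes the project's test suite can be invoked with pytest; adjust to your stack.
    result = subprocess.run(["pytest", "-q"])
    return result.returncode == 0


def deploy():
    # Placeholder for a real deployment step (container push, cloud release, and so on).
    print("Deploying build to the staging environment...")


if __name__ == "__main__":
    if run_tests():
        deploy()                      # deploy only when the automated tests pass
    else:
        sys.exit("Tests failed; deployment aborted.")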
2) Collaboration
The Development and Operations teams collaborate as a DevOps team, which improves the cultural
model as the teams become more productive, strengthening accountability and ownership. The teams
share their responsibilities and work closely in sync, which in turn makes deployment to production
faster.
3) Integration
Applications need to be integrated with other components in the environment. The integration phase
is where the existing code is combined with new functionality and then tested. Continuous
integration and testing enable continuous development. The frequency in the releases and micro-
services leads to significant operational challenges. To overcome such problems, continuous
integration and delivery are implemented to deliver in a quicker, safer, and reliable manner.
4) Configuration management
It ensures that the application interacts only with those resources that are concerned with the
environment in which it runs. Configuration is not hard-coded into the application; instead, the
external configuration is kept separate from the source code. The configuration file can be
written during deployment, or it can be loaded at run time, depending on the environment in
which the application is running.
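A minimal sketch of this separation is shown below: the application reads its settings from a file
written at deployment time and from environment variables at run time. The file name config.json and
the variable DATABASE_URL are assumptions made for illustration.

import json
import os


def load_config(path="config.json"):
    # Start with the deployment-written file, then let environment variables override it.
    config = {}
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    if "DATABASE_URL" in os.environ:
        config["database_url"] = os.environ["DATABASE_URL"]
    return config


settings = load_config()
print(settings.get("database_url", "no database configured"))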
Advantages of DevOps
Disadvantages of DevOps
Prototype Construction
Prototyping is defined as the process of developing a working replication of a product or system that
has to be engineered. It offers a small scale facsimile of the end product and is used for obtaining
customer feedback as described below:
The Prototyping Model is one of the most popularly used Software Development Life Cycle Models
(SDLC models). This model is used when the customers do not know the exact project requirements
beforehand. In this model, a prototype of the end product is first developed, tested and refined as per
customer feedback repeatedly till a final acceptable prototype is achieved which forms the basis for
developing the final product.
In this process model, the system is partially implemented before or during the analysis phase thereby
giving the customers an opportunity to see the product early in the life cycle. The process starts by
interviewing the customers and developing the incomplete high-level paper model. This document is
used to build the initial prototype supporting only the basic functionality as desired by the customer.
There are four types of models available:
A) Rapid Throwaway Prototyping – This technique offers a useful method of exploring ideas and
getting customer feedback for each of them. In this method, a developed prototype need not
necessarily be a part of the ultimately accepted prototype. Customer feedback helps in preventing
unnecessary design faults and hence, the final prototype developed is of better quality.
B) Evolutionary Prototyping – In this method, the prototype developed initially is incrementally
refined on the basis of customer feedback till it finally gets accepted. In comparison to Rapid
Throwaway Prototyping, it offers a better approach which saves time as well as effort. This is because
developing a prototype from scratch for every iteration of the process can sometimes be very
frustrating for the developers.
C) Incremental Prototyping – In this type of Prototyping, the final expected product is
broken down into small prototype pieces that are developed individually. In the end, when all
individual pieces are properly developed, then the different prototypes are collectively merged into a
single final product in their predefined order. It’s a very efficient approach that reduces the
complexity of the development process, where the goal is divided into sub-parts and each sub-part is
developed individually.
D) Extreme Prototyping – This method is mainly used for web development. It consists of three
sequential, independent phases:
D.1) In this phase, a basic prototype with all the existing static pages is presented in HTML
format.
D.2) In the 2nd phase, Functional screens are made with a simulated data process using a prototype
services layer.
D.3) This is the final step where all the services are implemented and associated with the final
prototype.
Advantages
The customers get to see the partial product early in the life cycle. This ensures a greater level
of customer satisfaction and comfort.
New requirements can be easily accommodated as there is scope for refinement.
Missing functionalities can be easily figured out.
Errors can be detected much earlier thereby saving a lot of effort and cost, besides enhancing
the quality of the software.
The developed prototype can be reused by the developer for more complicated projects in the
future.
Flexibility in design.
Disadvantages
Prototype Evaluation
Evaluation of a Prototype should be built into the development process. It should begin before any
technical phases and should continue beyond the life of the Prototype. It is the control mechanism for
the entire iterative design procedure. The evaluation process keeps the cost and effort of the
Prototype in line with its value. With constant evaluation, the system can die when the need for it is
over or it proves not to be valuable. Prototype evaluation can also help to quantify the impact of
decision-making processes on organisational goals. Sprague & Carlson consider what to measure and
how to measure it, and present a general model for evaluation. Prototype evaluations should be
considered as planned experiments designed to test one or more hypotheses.
Testing a prototype / developed design is a very important part of the design and manufacturing
process. Testing and evaluation simply confirm that the product will work as it is supposed to,
or whether it needs refinement.
In general, testing a prototype allows the designer and client to assess the viability of a design.
Will it be successful as a commercial product? Testing also helps identify potential faults, which
in turn allows the designer to make improvements.
There are many reasons why testing and evaluation takes place. Some reasons are described
below.
Testing and evaluation allow the client / customer to view the prototype and to give his/her
views. Changes and improvements are agreed upon and further work is carried out.
A focus group can try out the prototype and give their views and opinions. Faults and problems
are often identified at this stage.
Suggestions for improvement are often made at this stage. Safety issues are sometimes
identified, by thorough testing and evaluation. The prototype can be tested to British and
European Standards.
The prototype can be tested against any relevant regulations and legislation. Adjustments /
improvements to the design can then be made.
Evaluating a prototype allows the production costs to be assessed and finalised. Every stage of
manufacturing can be scrutinised for potential costs.
If the client has set financial limits / restrictions, then alterations to the design or manufacturing
processes, may have to be made. This may lead to alternative and cheaper manufacturing
processes being selected, for future production.
Component failure is often identified during the testing process. This may mean a component is
redesigned and not the entire product. Sometimes a component or part of a product will be tested
separately and not the whole product.
This allows more specific tests to be carried out. Evaluating the manufacture of the prototype,
allows the designer to plan an efficient and cost effective manufacturing production line.
Prototype testing can be carried out alongside the testing of similar designs or even the products
of competitors. This may lead to improvements.
Testing ensures that any user instructions can be worked out, stage by stage, so that the future
consumer can use the product efficiently and safely. This guarantees customer satisfaction.
Prototype Model
The prototype model requires that before carrying out the development of actual software, a working
prototype of the system should be built. A prototype is a toy implementation of the system. A
prototype usually turns out to be a very crude version of the actual system, possibly exhibiting
limited functional capabilities, low reliability, and inefficient performance as compared to the actual
software. In many instances, the client only has a general view of what is expected from the software
product. In such a scenario where there is an absence of detailed information regarding the input to
the system, the processing needs, and the output requirement, the prototyping model may be
employed.
Steps of Prototype Model
Evolutionary Process Model
The evolutionary process model resembles the iterative enhancement model. The same phases as
defined for the waterfall model occur here in a cyclical fashion. This model differs from the
iterative enhancement model in the sense that this does not require a useful product at the end of
each cycle. In evolutionary development, requirements are implemented by category rather than by
priority.
Although each project will have its own prototyping goal and a choice of fidelity that supports your
strategy, the following seven principles of effective prototyping should always be kept in mind:
In a previous chapter, we spoke about having a prototyping goal. If you have defined a prototyping
goal in advance, your chances of success are much higher!
Paper prototyping is the best approach for your very first prototype. Not only is it quicker to create a
paper prototype than any other type, but there is also something inherently creative about using a pen
or pencil.
The very first prototype that you do is really for yourself. It is to get your own thoughts straight and to
be able to create a few different versions of something to see what you like. It's important to be able
to quickly visualize several possibilities and to see what works the best.
I have found in the past that certain issues (particularly those involving the flow of screens) are only
discovered when you actually see them linked together and when you actually try to click through the
flow of a certain sequence.
A prototype is a proxy experience for the final product. Although it may not be fully "real", it should
be "real enough" for the purposes of the feedback you want to create.
When working with other team members, it may be that we want to achieve a shared understanding of
what we are about to build. When showing a prototype to a potential customer, it may be that we want
to see if they understand how to use the product in front of them and/or have observations about why
it would be a success or failure.
In both cases, it is better for you to say nothing at first. Watch the person interact with the prototype.
Is their reaction and understanding what you expected? If not, is it clear why not? If your prototype is
as clear as you think, you should not have to describe anything! Watching the person use the
prototype and checking if they follow the intended flows, experience anxiety over what their available
options are, or display enthusiasm at an outcome being so easily achieved are what you are looking
for.
When building a prototype, the chances are that at least some elements of that prototype have been
done before in other products. Become a student of the products around you.
There are lots of great sites out there that show lists of existing design paradigms in the product
landscape. For example, if you were thinking about building a calendar, you could check out the
calendar pattern on pttrns.com. This will give you lots of ideas and get your juices flowing. Bear in
mind this is just inspiration, and your context will be different to some of the sites or apps you see
here.
It is recommended to prototype anything that cannot be built easily by your team. The reason for this
is that if there is enough demand for it, a solution can be found. When we prototype, we want to
escape the boundaries of what is sensible and possible. Rather we want to assume that (by magic if
necessary), the ideal solutions to a problem are available to us.
If something is hard to build, then it is particularly important that we get feedback on what the
reaction would be if this feature or product were built. If the reaction is hugely positive, then a great
question is, "How can we make this possible, given that everyone wants it?" This is a better approach
than "Let's build it and see if anyone wants it."
It is important to keep scope to what is required for your reason for prototyping.
1. Starting with a paper prototype is ideal. This can get you to the point where you are relatively
happy with how many screens there are and what the major elements on each screen are.
2. Then often a black and white prototype is best for getting internal agreement with the team
and stakeholders of what to build. Eliminating color and major design elements (e.g. just
using a black rectangle instead of an image) will prevent conversations around design
happening too soon (i.e. before the main interactions are decided). Also, do not include too
many screens. You are trying to build consensus within the team and if you include some
non-related features, the conversation could end up being focused there!
3. Finally, a prototype with high-fidelity design is usually used when showing the prototype to
customers (whether it is clickable or fully interactive). The same rule applies to not showing
too many screens. If there are just enough screens to represent those customer outcomes and
flows that you wish to learn more about as defined by your prototyping goal - then that's
enough!
Iterate!
The main goal of prototyping is to create an early version of a product/feature that allows us to get
feedback that informs later versions of the product.
Specifically, if we get valuable feedback and new insight, the quickest way to verify that we have
correctly understood that feedback is another prototype. It is typical for prototypes to go through
several rounds of iterations in this fashion.
The key question to ask yourself after showing a version of a prototype a few times is, "Have I
learned anything new?" If you are not learning anything new, then you may not have any need for
further prototype iterations.
Requirements Engineering
Requirements engineering (RE) refers to the process of defining, documenting, and maintaining
requirements in the engineering design process. Requirements engineering provides the appropriate
mechanism to understand what the customer desires, analyze the need, assess feasibility,
negotiate a reasonable solution, specify the solution clearly, validate the specification, and
manage the requirements as they are transformed into a working system. Thus, requirement
engineering is the disciplined application of proven principles, methods, tools, and notation to
describe a proposed system's intended behavior and its associated constraints.
1. Feasibility Study
2. Requirement Elicitation and Analysis
3. Software Requirement Specification
4. Software Requirement Validation
5. Software Requirement Management
1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for developing software that is
acceptable to users, flexible to change, and conformable to established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current technologies, which are
needed to accomplish customer requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to which the required
software will solve business problems and satisfy customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the necessary software can
generate financial profits for an organization.
2. Requirement Elicitation and Analysis:
This is also known as the gathering of requirements. Here, requirements are identified with the help of
customers and existing system processes, if available.
Analysis of requirements starts with requirement elicitation. The requirements are analyzed to identify
inconsistencies, defects, omission, etc. We describe requirements in terms of relationships and also
resolve conflicts if any.
3. Software Requirement Specification:
A software requirement specification is a document created by a software analyst after the
requirements have been collected from various sources; the requirements received from the customer
are written in ordinary language. It is the job of the analyst to write the requirements in technical
language so that they can be understood by, and be useful to, the development team.
o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for modeling the
requirements. DFD shows the flow of data through a system. The system may be a company,
an organization, a set of procedures, a computer hardware system, a software system, or any
combination of the preceding. The DFD is also known as a data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store information about all
data items defined in DFDs. At the requirements stage, the data dictionary should at least
define customer data items, to ensure that the customer and developers use the same
definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement specification is the entity-
relationship diagram, often called an "E-R diagram." It is a detailed logical representation of
the data for the organization and uses three main constructs i.e. data entities, relationships,
and their associated attributes.
4. Software Requirement Validation:
After the requirement specifications are developed, the requirements discussed in this document are
validated. The user might demand an illegal or impossible solution, or experts may misinterpret the
needs. Requirements can be checked against the following conditions -
o If they can be practically implemented
o If they are correct and as per the functionality and speciality of the software
o If there are any ambiguities
o If they are complete
o If they can be demonstrated
Software Requirements: Broadly, software requirements are categorized into two categories:
functional requirements and non-functional requirements.
Scenario-based Modelling
There is no single correct way to proceed with scenario-based modeling; different processes address
different aspects. This section explores requirements modeling and scenario-based modeling, as well
as use case and activity diagrams, and how to apply them to determine the best way to proceed.
Requirements Modeling
Requirements modeling is the process of identifying the requirements this software solution must
meet in order to be successful. Requirements modeling contains several sub-stages, typically:
scenario-based modeling, flow-oriented modeling, data modeling, class-based modeling, and
behavioral modeling. Also, as the term ''modeling'' implies, all of these stages typically result in
producing diagrams that visually convey the concepts they identify. The most common method for
creating these diagrams is Unified Modeling Language (UML).
Use Case Diagrams
The use case is essentially a primary example of how the proposed software application or system is
meant to be used, from the user's point of view. A use case diagram will typically show system actors,
humans or other entities external to the system and how they interact with the system. Technically,
each action such a system actor can perform with the application or system is considered to be a
separate use case.
If we want to draw a use case diagram in UML, we must first study the complete system
appropriately. We need to find out every function that is offered by the system. When we have found
all of the system's functionalities, we convert these functionalities into a number of use cases, and
we use these use cases in the use case diagram.
A use case represents an essential functionality of a working system. Once the use cases are organized,
we next need to list the various actors or things that will interact with the system. These
actors interact with the functionality of the system. An actor can be a person or something else; it
can likewise be another system's entity. The actors should be pertinent to the functionality or
system with which they interact.
o The use case name and actor name should be meaningful and related to the system.
o The actor's interaction with the use case should be well-described and in a comprehensible
manner.
o Use annotations wherever they are essential.
o If the actor or use case has many relationships, then display only important interactions.
A use-case diagram depicts a distinct functionality of a system that is accomplished by a client. The
objective of the use-case diagram is to capture the system's key functionalities and visualize the
interactions of the different entities, known as actors, with the use cases. This is the basic use of a
use-case diagram.
With the help of the use-case diagram, we can characterize the main parts of the system and the flow
of work among them. In the use case, implementation details are hidden from external view, and only
the flow of events is represented.
Using use-case diagrams, we can detect the pre- and post-conditions that follow interaction with the
actor. We can determine these conditions using several test cases.
Use cases are intended to convey desired functionality, so the exact scope of a use case can differ
based on the system and the purpose of creating the UML model.
o It must be complete.
o It must be simple.
o The use-case diagram must show each and every interaction with the use case.
o A large use case should be generalized.
o At least one system module must be defined in the use case diagram.
o When there are a number of actors or use cases in the use-case diagram, only the significant
use cases should be represented.
o The use-case diagrams must be clear and easy so that anyone can understand them easily.
Use-case diagram provides an outline related to all components in the system. Use-case
diagram helps to define the role of administrators, users, etc.
The use-case diagram helps to provide solutions and answers to various questions that
may pop up if you begin a project unplanned.
It helps us to define the needs of the users extensively and explore how it will work.
System
With the help of a rectangle, we can draw the boundaries of the system, which include the use-cases.
We need to place the actors outside the system's boundaries.
Use-Case
With the help of ovals, we can draw the use-cases. We label the ovals with verbs that represent
the functions of the system.
Actors
Actors are the system's users. If one system is an actor of another system, the actor system is tagged
with the actor stereotype.
Relationships
With a simple line, we can represent the relationship between an actor and a use case. For relationships
between use-cases, we use arrows labeled either "extends" or "uses". The "extends"
relationship shows alternative options under a specific use case. The "uses" relationship shows
that one use-case is required to accomplish a job.
When it comes to examining a system's requirements, use-case diagrams are second to none. Use
cases are visual and simple to understand. The following are some guidelines that help you to make
better use cases that are appreciated by your customers and peers alike.
Generally, the use-case diagram contains use-cases, relationships, and actors. Systems and boundaries
may be included in the complex larger diagrams. We'll talk about the guidelines of the use-case
diagram on the basis of the objects.
Actors
Use-Cases
Systems/Packages
Class-based Modelling
The class diagram depicts a static view of an application. It represents the types of objects residing in
the system and the relationships between them. A class consists of its objects, and also it may inherit
from other classes. A class diagram is used to visualize, describe, and document various aspects
of the system, and also to construct executable software code. It shows the attributes, classes, functions,
and relationships to give an overview of the software system. It constitutes class names, attributes,
and functions in a separate compartment that helps in software development.
Upper Section: The upper section encompasses the name of the class. A class is a
representation of similar objects that shares the same relationships, attributes, operations,
and semantics. Some of the following rules that should be taken into account while
representing a class are given below:
1. Capitalize the initial letter of the class name.
2. Place the class name in the center of the upper section.
3. A class name must be written in bold format.
4. The name of the abstract class should be written in italics format.
Middle Section: The middle section constitutes the attributes, which describe the quality
of the class. The attributes have the following characteristics:
The attributes are written along with its visibility factors, which are public (+), private (-
), protected (#), and package (~).
Lower Section: The lower section contains methods or operations. The methods are
represented in the form of a list, where each method is written in a single line. It
demonstrates how a class interacts with data.
Relationships
Aggregation: Aggregation represents a whole-part relationship in which the parts can exist
independently of the whole. For example, a company encompasses a number of employees, and even
if one employee resigns, the company still exists.
Composition: The composition is a subset of aggregation. It portrays the dependency between the
parent and its child, which means if one part is deleted, then the other part also gets discarded. It
represents a whole-part relationship.
A contact book consists of multiple contacts, and if you delete the contact book, all the contacts will
be lost.
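The following Python sketch contrasts the two relationships using the examples above; the class
names are illustrative only.

class Employee:
    def __init__(self, name):
        self.name = name


class Company:
    # Aggregation: the company holds references to employees it does not own;
    # the Employee objects exist independently of the Company.
    def __init__(self, employees):
        self.employees = employees


class ContactBook:
    # Composition: the contact book creates and owns its contacts;
    # deleting the ContactBook discards the contacts with it.
    def __init__(self):
        self._contacts = []

    def add_contact(self, name, phone):
        self._contacts.append({"name": name, "phone": phone})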
Abstract Classes
In the abstract class, no objects can be a direct entity of the abstract class. The abstract class can
neither be declared nor be instantiated. It is used to define common functionality across classes. The
notation of the abstract class is similar to that of class; the only difference is that the name of the
class is written in italics. Since it does not involve any implementation for a given function, it is best
to use the abstract class with multiple objects.
Let us assume that we have an abstract class named displacement with a method declared inside it,
and that method is called drive(). Now, this abstract class method can be implemented by
any object, for example, car, bike, scooter, cycle, etc.
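A minimal Python sketch of this example, using the standard abc module, is given below; the class
names follow the displacement/drive() example from the paragraph above.

from abc import ABC, abstractmethod


class Displacement(ABC):
    @abstractmethod
    def drive(self):
        # No implementation here; concrete classes must provide one.
        ...


class Car(Displacement):
    def drive(self):
        return "Car is driving"


class Bike(Displacement):
    def drive(self):
        return "Bike is driving"


# Displacement() itself would raise TypeError: an abstract class cannot be instantiated.
for vehicle in (Car(), Bike()):
    print(vehicle.drive())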
How to draw a Class Diagram?
1. To describe a complete aspect of the system, it is suggested to give a meaningful name to the
class diagram.
2. The objects and their relationships should be acknowledged in advance.
3. The attributes and methods (responsibilities) of each class must be known.
4. A minimum number of desired properties should be specified, as a greater number of
unwanted properties will lead to a complex diagram.
5. Notes can be used as and when required by the developer to describe the aspects of a
diagram.
6. The diagram should be redrawn and reworked as many times as needed to make it correct
before producing its final version.
The class diagram is used to represent a static view of the system. It plays an essential role in the
establishment of the component and deployment diagrams. It helps to construct an executable code
to perform forward and backward engineering for any system, or we can say it is mainly used for
construction. It represents the mapping with object-oriented languages that are C++, Java, etc. Class
diagrams can be used for the following purposes:
Functional Modelling gives the process perspective of the object-oriented analysis model and an
overview of what the system is supposed to do. It defines the function of the internal processes in the
system with the aid of Data Flow Diagrams (DFDs). It depicts the functional derivation of the data
values without indicating how they are derived when they are computed, or why they need to be
computed.
The four main parts of a DFD are:
Processes,
Data Flows,
Actors, and
Data Stores.
The other parts of a DFD are:
Constraints, and
Control Flows.
Features of a DFD
Processes
Processes are the computational activities that transform data values. A whole system can be
visualized as a high-level process. A process may be further divided into smaller components. The
lowest-level process may be a simple function.
Representation in DFD − A process is represented as an ellipse with its name written inside it and
contains a fixed number of input and output data values.
Example − The following figure shows a process Compute_HCF_LCM that accepts two integers as
inputs and outputs their HCF (highest common factor) and LCM (least common multiple).
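As a sketch, the Compute_HCF_LCM process can be written as a function with two integer inputs and
two output values; the implementation below is one straightforward way to express it in Python.

from math import gcd


def compute_hcf_lcm(a, b):
    hcf = gcd(a, b)                   # highest common factor
    lcm = abs(a * b) // hcf           # least common multiple derived from the HCF
    return hcf, lcm


print(compute_hcf_lcm(12, 18))        # prints (6, 36)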
Data Flows
Data flow represents the flow of data between two processes. It could be between an actor and a
process, or between a data store and a process. A data flow denotes the value of a data item at some
point of the computation. This value is not changed by the data flow.
Representation in DFD − A data flow is represented by a directed arc or an arrow, labelled with the
name of the data item that it carries.
In the above figure, Integer_a and Integer_b represent the input data flows to the process, while
L.C.M. and H.C.F. are the output data flows.
Actors
Actors are the active objects that interact with the system by either producing data and inputting
them to the system, or consuming data produced by the system. In other words, actors serve as the
sources and the sinks of data.
Representation in DFD − An actor is represented by a rectangle. Actors are connected to the inputs
and outputs and lie on the boundary of the DFD.
Example − The following figure shows the actors, namely, Customer and Sales_Clerk in a counter
sales system.
Data Stores
Data stores are the passive objects that act as a repository of data. Unlike actors, they cannot perform
any operations. They are used to store data and retrieve the stored data. They represent a data
structure, a disk file, or a table in a database.
Representation in DFD − A data store is represented by two parallel lines containing the name of
the data store. Each data store is connected to at least one process. Input arrows contain information
to modify the contents of the data store, while output arrows contain information retrieved from the
data store.
Example − The following figure shows a data store, Sales_Record, that stores the details of all sales.
Input to the data store comprises details of sales such as item, billing amount, date, etc. To find the average sales, the process retrieves the sales records and computes the average.
Constraints
Constraints specify the conditions or restrictions that need to be satisfied over time. They allow
adding new rules or modifying existing ones. Constraints can appear in all the three models of
object-oriented analysis.
Example − The following figure shows a portion of DFD for computing the salary of employees of a
company that has decided to give incentives to all employees of the sales department and increment
the salary of all employees of the HR department. It can be seen that the constraint {Dept:Sales}
causes incentive to be calculated only if the department is sales and the constraint {Dept:HR} causes
increment to be computed only if the department is HR.
Control Flows
A process may be associated with a certain Boolean value and is evaluated only if the value is true,
though it is not a direct input to the process. These Boolean values are called the control flows.
Representation in DFD − Control flows are represented by a dotted arc from the process producing
the Boolean value to the process controlled by them.
Example − The following figure represents a DFD for arithmetic division. The Divisor is tested for
non-zero. If it is not zero, the control flow OK has a value True and subsequently the Divide process
computes the Quotient and the Remainder.
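A minimal Java sketch of this control flow is shown below; the variable names and sample values are assumptions for illustration.

public class DivideWithControlFlow {
    public static void main(String[] args) {
        int dividend = 17;
        int divisor = 5;

        // Control flow "OK": produced by testing the Divisor for non-zero.
        boolean ok = (divisor != 0);

        if (ok) {
            // The Divide process executes only when OK is true.
            int quotient = dividend / divisor;
            int remainder = dividend % divisor;
            System.out.println("Quotient = " + quotient + ", Remainder = " + remainder);
        } else {
            System.out.println("Division skipped: divisor is zero");
        }
    }
}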
In order to develop the DFD model of a system, a hierarchy of DFDs is constructed. The top-level DFD comprises a single process and the actors interacting with it. At each successive lower level, further details are gradually included.
Example − Let us consider a software system, Wholesaler Software, that automates the transactions of a wholesale shop. The shop sells in bulk and has a clientele comprising merchants and retail shop owners. Each customer is asked to register with his/her particulars and is given a unique customer code, C_Code. Once a sale is done, the shop registers its details and sends the goods for dispatch. Each year, the shop distributes Christmas gifts to its customers, which comprise a silver coin or a gold coin depending upon the total sales and the decision of the proprietor.
The actors interacting with this top-level process are:
Customers
Salesperson
Proprietor
In the next level DFD, as shown in the following figure, the major processes of the system are
identified, the data stores are defined and the interaction of the processes with the actors, and the
data stores are established.
Processes: Register Customers, Process Sales, and Ascertain Gifts.
Data stores: Customer Details, Sales Details, and Gift Details.
The following figure shows the details of the process Register Customer. There are three processes
in it, Verify Details, Generate C_Code, and Update Customer Details. When the details of the
customer are entered, they are verified. If the data is correct, C_Code is generated and the data store
Customer Details is updated.
Advantages
DFDs depict the boundaries of a system and hence are helpful in portraying the relationship between the external objects and the processes within the system.
They help the users to have a knowledge about the system.
The graphical representation serves as a blueprint for the programmers to develop a system.
DFDs provide detailed information about the system processes.
They are used as a part of the system documentation.
Disadvantages
DFDs take a long time to create, which may not be feasible for practical purposes.
DFDs do not provide any information about the time-dependent behavior, i.e., they do not specify when the transformations are done.
They do not throw any light on the frequency of computations or the reasons for computations.
The preparation of DFDs is a complex process that needs considerable expertise. Also, it is difficult for a non-technical person to understand.
The method of preparation is subjective and leaves ample scope to be imprecise.
Behavioural Modelling
Behavioral models contain procedural statements, which control the simulation and manipulate
variables of the data types. These statements are contained within the procedures. Each of the
procedures has an activity flow associated with it. During the behavioral model simulation, all the
flows defined by the always and initial statements start together at simulation time zero.
The initial statements are executed once, while the always statements are executed repetitively.
Example
The register variables a and b are initialized to binary 1 and 0 respectively at simulation time zero. The initial statement is completed and is not executed again during that simulation run. This initial statement contains a begin-end block of statements. In this begin-end block, a is initialized first, followed by b.
Procedural Assignments
Procedural assignments are for updating integer, reg, time, and memory variables. There is a
significant difference between a procedural assignment and continuous assignment, such as:
1. Continuous assignments drive net variables; they are evaluated and updated whenever an input operand changes value. Procedural assignments update the value of register variables under the control of the procedural flow constructs that surround them.
2. The right-hand side of a procedural assignment can be any expression that evaluates to a value.
However, part-selects on the right-hand side must have constant indices. The left-hand side indicates
the variable that receives the assignment from the right-hand side. The left-hand side of a procedural
assignment can take one of the following forms:
o Register, integer, real, or time variable: An assignment to the name reference of one of these
data types.
o Bit-select of a register, integer, real, or time variable: An assignment to a single bit that
leaves the other bits untouched.
o Part-select of a register, integer, real, or time variable: A part-select of two or more contiguous bits that leaves the rest of the bits untouched. For the part-select form, only constant expressions are legal.
o Memory element: A single word of memory. Bit-selects and part-selects are illegal on
memory element references.
o Concatenation of any of the above: A concatenation of any of the previous four forms can be
specified, which effectively partitions the result of the right-hand side expression and then
assigns the partition parts, in order, to the various parts of the concatenation.
Delay in Assignment
In a delayed assignment, Δt time units pass before the statement is executed, and the left-hand
assignment is made. With an intra-assignment delay, the right side is evaluated immediately, but
there is a delay of Δt before the result is placed in the left-hand assignment.
If another procedure changes a right-hand side signal during Δt, it does not affect the result of the assignment, because the right-hand side has already been evaluated.
Syntax
Blocking Assignments
A blocking procedural assignment statement must be executed before executing the statements that
follow it in a sequential block. The statement does not prevent the execution of statements that
follow it in a parallel block.
Syntax
Non-blocking Assignments
The non-blocking procedural assignment is used to schedule assignments without blocking the procedural flow. We can use the non-blocking procedural statement whenever we want to make several register assignments within the same time step without regard to order or dependence upon each other.
Syntax
Simulator evaluates and executes the non-blocking procedural assignment in two steps:
Step 1: The simulator evaluates the right-hand side and schedules the new value assignment at a
time specified by a procedural timing control.
Step 2: At the end of the time step, when the given delay has expired, or the appropriate event has
taken place, the simulator executes the assignment by assigning the value to the left-hand side.
Case Statement
The case statement is a unique multi-way decision statement that tests whether an expression
matches several other expressions, and branches accordingly. The case statement is useful for
describing, for example, the decoding of a microprocessor instruction. The case statement differs
from the multi-way if-else-if construct in two essential ways, such as:
1. The conditional expressions in the if-else-if construct are more general than comparing one
expression with several others, as in the case statement.
2. The case statement provides a definitive result when there are x and z values in an expression.
Looping Statements
There are four types of looping statements: forever, repeat, while, and for. They are used to control the execution of a statement zero, one, or more times. The for loop, for example, executes a statement in the following three steps:
Step 1: Executes an assignment normally used to initialize a variable that controls the number of
loops executed.
Step 2: Evaluates an expression. Suppose the result is zero, then the for loop exits. And if it is not
zero, for loop executes its associated statements and then performs step 3.
Step 3: Executes an assignment normally used to modify the loop control variable's value, then
repeats step 2.
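These three steps are the same steps performed by a conventional for loop in a general-purpose language. A minimal Java sketch is shown below; the loop bound and body are assumptions used only for illustration.

public class ForLoopSteps {
    public static void main(String[] args) {
        // Step 1: initialize the loop-control variable (i = 0).
        // Step 2: evaluate the condition; exit when it is false.
        // Step 3: modify the loop-control variable (i++), then repeat Step 2.
        for (int i = 0; i < 4; i++) {
            System.out.println("iteration " + i);   // executed while the condition holds
        }
    }
}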
Delay Controls
Verilog handles the delay controls in the following ways, such as:
1. Delay Control
<statement>
::= <delay_control> <statement_or_null>
<delay_control>
::= # <NUMBER>
||= # <identifier>
||= # ( <mintypmax_expression> )
The following example delays the execution of the assignment by 10 time units.
2. Event Control
The execution of a procedural statement can be synchronized with a value change on a net or
register, or the occurrence of a declared event.
Verilog syntax can also be used to detect a change based on the direction of the change, that is, toward the value 1 (posedge) or toward the value 0 (negedge).
The behavior of posedge and negedge for unknown expression values is as follows:
Procedures
Verilog behavioral descriptions are built from two kinds of procedural blocks:
1. Initial blocks
2. Always blocks
Initial Blocks
The initial and always statements are enabled at the beginning of the simulation. An initial block executes only once, and its activity dies when the statement has finished.
Syntax
<initial_statement>
::= initial <statement>
Always Blocks
An always block executes repeatedly; its activity dies only when the simulation is terminated. There is no limit to the number of initial and always blocks defined in a module.
Syntax
<always_statement>
::= always <statement>
UNIT II SOFTWARE DESIGN
Design Concepts
Software design principles are concerned with providing means to handle the complexity of the design
process effectively. Effectively managing the complexity will not only reduce the effort needed for
design but can also reduce the scope of introducing errors during design.
Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we divide and conquer: the problem is divided into smaller pieces so that each piece can be handled separately. For software design, the goal is to divide the problem into manageable pieces.
These pieces cannot be entirely independent of each other as they together form the system. They have
to cooperate and communicate to solve the problem. This communication adds complexity.
Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of the implementation. Abstraction can be applied to an existing element as well as to the component being designed.
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
In functional abstraction, a module is specified by the function it performs; the details of how that function is carried out are not visible to the users of the function.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
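A minimal Java sketch of data abstraction is shown below; the IntStack class and its internal representation are assumptions for illustration, not part of any particular design.

import java.util.ArrayList;
import java.util.List;

public class IntStack {
    // Internal representation is hidden from users of the abstraction.
    private final List<Integer> items = new ArrayList<>();

    public void push(int value) {
        items.add(value);
    }

    public int pop() {
        if (items.isEmpty()) {
            throw new IllegalStateException("stack is empty");
        }
        return items.remove(items.size() - 1);
    }

    public boolean isEmpty() {
        return items.isEmpty();
    }

    public static void main(String[] args) {
        IntStack stack = new IntStack();
        stack.push(1);
        stack.push(2);
        // The caller manipulates the stack without knowing about the ArrayList inside.
        System.out.println(stack.pop());   // prints 2
    }
}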
Modularity
Modularity refers to the division of software into separate modules that are differently named and addressed and are later integrated to obtain the completely functional software. It is the only property that allows a program to be intellectually manageable.
Each module is a well-defined system that can be used with other applications.
Each module has single specified objectives.
Modules can be separately compiled and saved in the library.
Modules should be easier to use than to build.
Modules are simpler from outside than inside.
Advantages of Modularity
Disadvantages of Modularity
Modular design reduces the design complexity and results in easier and faster implementation by allowing parallel development of various parts of a system. We discuss the different aspects of modular design in detail in this section:
2. Information hiding: The principle of information hiding suggests that modules should be characterized by the design decisions they hide from all other modules; in other words, modules should be specified and designed so that the data contained within a module is inaccessible to other modules that have no need for that information.
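A minimal Java sketch of information hiding is shown below; the Account class and its internal representation are assumptions for illustration only.

public class Account {
    // Hidden design decision: the balance is stored in cents as a long
    // to avoid floating-point rounding. Other modules never see this.
    private long balanceInCents = 0;

    public void deposit(double amount) {
        balanceInCents += Math.round(amount * 100);
    }

    public void withdraw(double amount) {
        long cents = Math.round(amount * 100);
        if (cents > balanceInCents) {
            throw new IllegalArgumentException("insufficient funds");
        }
        balanceInCents -= cents;
    }

    public double getBalance() {
        return balanceInCents / 100.0;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(10.50);
        a.withdraw(2.25);
        // The internal representation could change without affecting this caller.
        System.out.println(a.getBalance());   // prints 8.25
    }
}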
Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to develop initially and easy to change later. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be composed and how pieces of code should fit together to form a program. To design a system, there are two possible approaches:
1. Top-down Approach: This approach starts with the identification of the main components and then decomposes them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy. This approach is suitable in the case of an existing system.
Design Model
Design modeling in software engineering represents the features of the software that help the engineer to develop it effectively: the architecture, the user interface, and the component-level detail. Design modeling provides a variety of different views of the system, like an architectural plan for a home or building. Different methods like data-driven, pattern-driven, or object-oriented methods are used for constructing the design model. All these methods use a set of design principles for designing a model.
Designing a model is an important phase and is a multi-step process that represents the data structure, program structure, interface characteristics, and procedural details. It is mainly classified into four categories – data design, architectural design, interface design, and component-level design.
Data design: It represents the data objects and their interrelationship in an entity-relationship
diagram. The entity-relationship diagram consists of the information required for each entity or data object, and it shows the relationships between these objects. It shows the structure of the data in terms of tables. It shows three types of relationships – one to one, one to many, and many to many. In a one-to-one relationship, one entity is connected to exactly one other entity. In a one-to-many relationship, one entity is connected to more than one entity.
Architectural design: It defines the relationship between major structural elements of the
software. It is about decomposing the system into interacting components. It is expressed as
a block diagram defining an overview of the system structure – features of the components
and how these components communicate with each other to share data. It defines the
structure and properties of the component that are involved in the system and also the inter-
relationship among these components.
User Interface design: It represents how the software communicates with the user, i.e. the behavior of the system. It refers to the part of the product where the user interacts with the controls or displays of the product. For example, military vehicles, aircraft, audio equipment, and computer peripherals are areas where user interface design is implemented. UI design becomes efficient only after performing usability testing. This is done to test what works and what does not work as expected.
Component level design: It transforms the structural elements of the software architecture
into a procedural description of software components. It is a perfect way to share a large
amount of data. Components need not be concerned with how data is managed at a
centralized level.
The analysis model represents the information, functions, and behavior of the system. The design model translates all these things into architecture – a set of subsystems that implement major functions and a set of component-level designs that are the realization of the analysis classes. This implies that the design model must be traceable to the analysis model.
Software architecture is the skeleton of the system to be built. It affects interfaces, data
structures, behavior, program control flow, the manner in which testing is conducted,
maintainability of the resultant system, and much more.
Focus on the design of the data:
Data design encompasses the manner in which the data objects are realized within the
design. It helps to simplify the program flow, makes the design and implementation of the
software components easier, and makes overall processing more efficient.
The user interface is the face of any software. No matter how good its internal functions are or how well designed its architecture is, if the user interface is poor and end users do not find the software easy to use, they will form the opinion that the software is bad.
Coupling of different components into one is done in many ways like via a component
interface, by messaging, or through global data. As the level of coupling increases, error
propagation also increases, and overall maintainability of the software decreases. Therefore,
component coupling should be kept as low as possible.
The data flow between components decides the processing efficiency, error flow, and design
simplicity. A well-designed interface makes integration easier and tester can validate the
component functions more easily.
It means that the functions delivered by a component should be cohesive, i.e. the component should focus on one and only one function or sub-function.
Conclusion
Here in this article, we have discussed the basics of design modeling in software engineering
along with its principles.
Software Architecture
The architecture of a system describes its major components, their relationships (structures), and
how they interact with each other. Software architecture and design includes several contributory
factors such as Business strategy, quality attributes, human dynamics, design, and IT environment.
We can segregate Software Architecture and Design into two distinct phases: Software Architecture
and Software Design. In Architecture, nonfunctional decisions are cast and separated from the functional requirements. In Design, functional requirements are accomplished. Architecture serves as
a blueprint for a system. It provides an abstraction to manage the system complexity and establish a
communication and coordination mechanism among components.
It defines a structured solution to meet all the technical and operational requirements, while
optimizing the common quality attributes like performance and security.
Further, it involves a set of significant decisions about the organization related to software development, and each of these decisions can have a considerable impact on the quality, maintainability, performance, and overall success of the final product. These decisions comprise −
o Selection of structural elements and their interfaces by which the system is
composed.
o Behavior as specified in collaborations among those elements.
o Composition of these structural and behavioral elements into larger subsystems.
o Architectural decisions align with business objectives.
o Architectural styles guide the organization.
Software Design
Software design provides a design plan that describes the elements of a system, how they fit, and
work together to fulfill the requirement of the system. The objectives of having a design plan are as
follows −
To negotiate system requirements, and to set expectations with customers, marketing, and
management personnel.
Act as a blueprint during the development process.
Guide the implementation tasks, including detailed design, coding, integration, and testing.
It comes before the detailed design, coding, integration, and testing and after the domain analysis,
requirements analysis, and risk analysis.
Goals of Architecture
The primary goal of the architecture is to identify requirements that affect the structure of the
application. A well-laid architecture reduces the business risks associated with building a technical
solution and builds a bridge between business and technical requirements.
Expose the structure of the system, but hide its implementation details.
Realize all the use-cases and scenarios.
Try to address the requirements of various stakeholders.
Handle both functional and quality requirements.
Reduce the cost of ownership and improve the organization’s market position.
Improve quality and functionality offered by the system.
Improve external confidence in either the organization or system.
Limitations
A Software Architect provides a solution that the technical team can create and design for the entire
application. A software architect should have expertise in the following areas −
Design Expertise
Expert in software design, including diverse methods and approaches such as object-oriented
design, event-driven design, etc.
Lead the development team and coordinate the development efforts for the integrity of the
design.
Should be able to review design proposals and evaluate tradeoffs among them.
Domain Expertise
Expert on the system being developed and plan for software evolution.
Assist in the requirement investigation process, assuring completeness and consistency.
Coordinate the definition of domain model for the system being developed.
Technology Expertise
Methodological Expertise
Expert on software development methodologies that may be adopted during SDLC (Software
Development Life Cycle).
Choose the appropriate approaches for development that help the entire team.
Facilitate the technical work among team members and reinforce the trust relationships in
the team.
Information specialist who shares knowledge and has vast experience.
Protect the team members from external forces that would distract them and bring less value
to the project.
Deliverables of the Architect
Quality Attributes
Quality is a measure of excellence or the state of being free from deficiencies or defects. Quality
attributes are the system properties that are separate from the functionality of the system.
Implementing quality attributes makes it easier to differentiate a good system from a bad one.
Attributes are overall factors that affect runtime behavior, system design, and user experience.
Static quality attributes reflect the structure of a system and organization and are directly related to architecture, design, and source code. They are invisible to the end user, but affect the development and maintenance cost, e.g.: modularity, testability, maintainability, etc.
Dynamic quality attributes reflect the behavior of the system during its execution. They are directly related to the system’s architecture, design, source code, configuration, deployment parameters, environment, and platform. They are visible to the end user and exist at runtime, e.g. throughput, robustness, scalability, etc.
Quality Scenarios
Quality scenarios specify how to prevent a fault from becoming a failure. They can be divided into six parts based on their attribute specifications: the source of stimulus, the stimulus, the environment, the artifact, the response, and the response measure.
Architectural Styles
An architectural style is a large-scale, predefined solution structure. Using an architectural style helps us to build the system more quickly than building everything from scratch. Architectural styles are similar to patterns, but provide a solution for a larger challenge.
In this section we study several architectural styles for communication in distributed systems: the REST style (Representational State Transfer), the REST-like style, the RPC style (Remote Procedure Call), the SOAP style, and GraphQL. We compare the approaches and show their advantages, disadvantages, commonalities, and differences. APIs can basically be realized using any of these styles. How do we know whether a particular architectural style is appropriate for a given API?
When realizing a new API, an appropriate API philosophy should be chosen, such
as GraphQL, REST, SOAP or RPC.
Once the bigger-picture architectural design decisions are nailed down, frontend design decisions can be handled. These design decisions should be documented by refining and updating the API description.
The API description thus becomes an evolving, single source of truth about the current state of the
system.
REST Style
REST (Representational State Transfer) is an architectural style for services, and as such it defines a
set of architectural constraints and agreements. A service, which complies with the REST constraints,
is said to be RESTful.
REST is designed to make optimal use of an HTTP-based infrastructure, and due to the success of the web, HTTP-based infrastructure such as servers, caches, and proxies is widely available. The web, which is based on HTTP, provides some proof for an architecture that not only scales extremely well but also has longevity. The basic idea of REST is to transfer the ideas that worked well for the web and apply them to web services.
HATEOAS is an abbreviation for Hypermedia As The Engine Of Application State. HATEOAS is the
aspect of REST, which allows for dynamic architectures. It allows clients to explore any API without
any a-priori knowledge of data formats or of the API itself.
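A minimal sketch of a RESTful interaction, using Java's built-in HttpClient, is shown below. The endpoint URL and the JSON it returns are hypothetical and serve only to illustrate the style.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // REST models the customer as a resource identified by a URI;
        // GET retrieves a representation of its current state.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/customers/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A HATEOAS-style response would embed links telling the client
        // which related resources and actions are available next.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}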
REST-like APIs
There is a large group of APIs which claim to follow the REST style, but actually they don’t. They only implement some elements of REST; at their core, they are RPC APIs. The Richardson Maturity Model may be helpful for judging how closely an API actually follows the REST style.
RPC Style
RPC is an abbreviation for Remote Procedure Call. RPC is an architectural style for distributed
systems. It has been around since the 1980s. Today the most widely used RPC styles are JSON-RPC
and XML-RPC. Even SOAP can be considered to follow an RPC architectural style.
The central concept in RPC is the procedure. The procedures do not need to run on the local machine,
but they can run on a remote machine within the distributed system. When using an RPC framework,
calling a remote procedure should be as simple as calling a local procedure.
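A minimal Java sketch of this idea is shown below. The CustomerService interface, the hand-written stub, and the canned reply are all assumptions for illustration; a real RPC framework would generate the stub and perform the network call.

public class RpcSketch {

    // The procedure exposed by the remote service, described as a plain interface.
    interface CustomerService {
        String getCustomer(String customerId);
    }

    // Stand-in for a framework-generated client stub. A real stub would serialize
    // the arguments, send them over the network (e.g. as JSON-RPC), and decode the reply.
    static class CustomerServiceStub implements CustomerService {
        @Override
        public String getCustomer(String customerId) {
            // Network call omitted; a canned reply keeps the sketch self-contained.
            return "{\"id\":\"" + customerId + "\",\"name\":\"Example Customer\"}";
        }
    }

    public static void main(String[] args) {
        CustomerService service = new CustomerServiceStub();
        // To the caller this looks exactly like a local procedure call.
        System.out.println(service.getCustomer("42"));
    }
}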
SOAP Style
SOAP follows the RPC style (see previous section) and exposes procedures as central concepts (e.g.
getCustomer). It is standardized by the W3C and is the most widely used protocol for web services.
SOAP style architectures are in widespread use, however, typically only for company internal use or
for services called by trusted partners.
GraphQL Style
For a long time, REST was thought to be the only appropriate tool for building modern APIs. But in
recent years, another tool was added to the toolbox, when Facebook published GraphQL, the
philosophy, and framework powering its popular API. More and more tech companies
tried GraphQL and adopted it as one of their philosophies for API design. Some built a GraphQL API
next to their existing REST API, some replaced their REST API with GraphQL, and even others have
ignored the GraphQL trend to focus single-mindedly on their REST API.
But, not only the tech companies are divided. Following the discussions around REST and GraphQL,
there seem to be two camps of gurus leading very emotional discussions: “always use the hammer,”
one camp proclaims. “NO, always use the screwdriver,” the other camp insists. And for the rest of
us? Unfortunately, this situation is confusing, leading to paralysis and indecision about API design.
The intention of the Book on REST & GraphQL is to clear up the confusion and enable you to make
your own decision, the decision that is right for your API. By having the necessary criteria for
comparison and general properties, strengths, and weaknesses of the approach, you can choose if the
hammer or the screwdriver is better suited for your API project.
Conclusion
APIs can basically be realized using any of these styles. How do we know, whether a particular
architectural style is appropriate for a given API? The resulting API exposes many of the previously
stated desirable properties.
Most commonly, APIs are realized using REST over HTTP. This is why one can assume in practice
that APIs are realized with the REST style.
Architectural Design
The software needs an architectural design to represent its overall structure. IEEE defines architectural design as “the process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system.” The software that is built for computer-based systems can exhibit one of many architectural styles.
Each style will describe a system category that consists of :
A set of components (e.g. a database, computational modules) that will perform a function required by the system.
The set of connectors will help in coordination, communication, and cooperation between the
components.
Conditions that define how components can be integrated to form the system.
Semantic models that help the designer to understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.
Taxonomy of Architectural styles:
1. Object Oriented architecture: The components of a system encapsulate data and the operations
that must be applied to manipulate the data. The coordination and communication between the
components are established via the message passing.
2. Layered architecture:
A number of different layers are defined, with each layer performing a well-defined set of operations. Each layer performs operations that become progressively closer to the machine instruction set.
At the outer layer, components receive the user interface operations, and at the inner layers, components perform the operating system interfacing (communication and coordination with the OS).
Intermediate layers provide utility services and application software functions (a small sketch of this style follows this list).
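A minimal Java sketch of the layered style is shown below; the three layers and their names (UiLayer, ServiceLayer, DataLayer) are assumptions used only for illustration.

public class LayeredSketch {

    // Inner layer: closest to storage and machine-level concerns.
    static class DataLayer {
        String fetchRecord(int id) {
            return "record-" + id;
        }
    }

    // Intermediate layer: utility/application services built on the data layer.
    static class ServiceLayer {
        private final DataLayer data = new DataLayer();
        String describe(int id) {
            return "Details for " + data.fetchRecord(id);
        }
    }

    // Outer layer: receives user-interface operations and delegates downward.
    static class UiLayer {
        private final ServiceLayer service = new ServiceLayer();
        void show(int id) {
            System.out.println(service.describe(id));
        }
    }

    public static void main(String[] args) {
        // Each layer only calls the layer directly beneath it.
        new UiLayer().show(7);
    }
}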
Component-Level Design
Component-based architecture focuses on the decomposition of the design into individual functional or
logical components that represent well-defined communication interfaces containing methods, events,
and properties. It provides a higher level of abstraction and divides the problem into sub-problems, each
associated with component partitions.
The primary objective of component-based architecture is to ensure component reusability. A
component encapsulates functionality and behaviors of a software element into a reusable and self-
deployable binary unit. There are many standard component frameworks such as COM/DCOM,
JavaBean, EJB, CORBA, .NET, web services, and grid services. These technologies are widely used in
local desktop GUI application design such as graphic JavaBean components, MS ActiveX components,
and COM components, which can be reused by a simple drag-and-drop operation.
Component-oriented software design has many advantages over the traditional object-oriented
approaches such as −
Reduced time to market and reduced development cost by reusing existing components.
Increased reliability with the reuse of the existing components.
What is a Component?
A component is a modular, portable, replaceable, and reusable set of well-defined functionality that encapsulates its implementation and exports it as a higher-level interface.
A component is a software object, intended to interact with other components, encapsulating certain
functionality or a set of functionalities. It has an obviously defined interface and conforms to a
recommended behavior common to all components within an architecture.
A software component can be defined as a unit of composition with a contractually specified interface
and explicit context dependencies only. That is, a software component can be deployed independently
and is subject to composition by third parties.
Views of a Component
A component can have three different views − object-oriented view, conventional view, and process-
related view.
Object-oriented view
A component is viewed as a set of one or more cooperating classes. Each problem domain class (analysis) and infrastructure class (design) is elaborated to identify all attributes and operations that apply to its implementation. It also involves defining the interfaces that enable classes to communicate and cooperate.
Conventional view
It is viewed as a functional element or a module of a program that integrates the processing logic, the
internal data structures that are required to implement the processing logic and an interface that enables
the component to be invoked and data to be passed to it.
Process-related view
In this view, instead of creating each component from scratch, the system is built from existing components maintained in a library. As the software architecture is formulated, components are selected from the library and used to populate the architecture.
A user interface (UI) component includes grids and buttons, referred to as controls, and utility components expose a specific subset of functions used in other components.
Other common types of components are those that are resource intensive, not frequently
accessed, and must be activated using the just-in-time (JIT) approach.
Many components are invisible which are distributed in enterprise business applications and
internet web applications such as Enterprise JavaBean (EJB), .NET components, and CORBA
components.
Characteristics of Components
Reusability − Components are usually designed to be reused in different situations in different
applications. However, some components may be designed for a specific task.
Replaceable − Components may be freely substituted with other similar components.
Not context specific − Components are designed to operate in different environments and
contexts.
Extensible − A component can be extended from existing components to provide new behavior.
Encapsulated − A component depicts the interfaces, which allow the caller to use its functionality, and does not expose details of the internal processes or any internal variables or state.
Independent − Components are designed to have minimal dependencies on other components.
Creates naming conventions for components that are specified as part of the architectural model and then refines or elaborates them as part of the component-level model.
Attains architectural component names from the problem domain and ensures that they have
meaning to all stakeholders who view the architectural model.
Extracts the business process entities that can exist independently without any associated
dependency on other entities.
Recognizes and discovers these independent entities as new components.
Uses infrastructure component names that reflect their implementation-specific meaning.
Models any dependencies from left to right and inheritance from top (base class) to bottom
(derived classes).
Model any component dependencies as interfaces rather than representing them as a direct
component-to-component dependency.
Recognizes all design classes that correspond to the problem domain as defined in the analysis model
and architectural model.
Recognizes all design classes that correspond to the infrastructure domain.
Describes all design classes that are not acquired as reusable components, and specifies message
details.
Identifies appropriate interfaces for each component and elaborates attributes and defines data
types and data structures required to implement them.
Describes processing flow within each operation in detail by means of pseudo code or UML
activity diagrams.
Describes persistent data sources (databases and files) and identifies the classes required to
manage them.
Develops and elaborates behavioral representations for a class or component. This can be done by elaborating the UML state diagrams created for the analysis model and by examining all use cases that are relevant to the design class.
Elaborates deployment diagrams to provide additional implementation detail.
Demonstrates the location of key packages or classes of components in a system by using class
instances and designating specific hardware and operating system environment.
The final decision can be made by using established design principles and guidelines.
Experienced designers consider all (or most) of the alternative design solutions before settling
on the final design model.
Advantages
Ease of deployment − As new compatible versions become available, it is easier to replace
existing versions with no impact on the other components or the system as a whole.
Reduced cost − The use of third-party components allows you to spread the cost of
development and maintenance.
Ease of development − Components implement well-known interfaces to provide defined
functionality, allowing development without impacting other parts of the system.
Reusable − The use of reusable components means that they can be used to spread the
development and maintenance cost across several applications or systems.
Mitigation of technical complexity − A component mitigates the complexity through the use of a component container and its services.
Reliability − The overall system reliability increases since the reliability of each individual
component enhances the reliability of the whole system via reuse.
System maintenance and evolution − Easy to change and update the implementation without
affecting the rest of the system.
Independent − Independence and flexible connectivity of components. Independent development of components by different groups in parallel. This improves productivity for both current and future software development.
User Experience Design
User interface design is also known as user interface engineering. User interface design means the process of designing user interfaces for software and machines such as mobile devices, home appliances, computers, and other electronic devices, with the aim of increasing usability and improving the user experience.
Choosing Interface Components
Users have become accustomed to interface components behaving in a certain manner, so try to be predictable and consistent in your selections and their layout. As a result, task completion, satisfaction, and performance will increase.
Interface components may involve:
Input Controls: Input Controls involve buttons, toggles, dropdown lists, checkboxes, date fields,
radio buttons, and text fields.
Navigational Components: Navigational components contain sliders, tags, pagination, search fields, breadcrumbs, and icons.
Informational Components: Informational Components contain tooltips, modal windows, progress
bar, icons, notification message boxes.
Containers: Containers include accordion.
Many components may be suitable to display content at times. When this happens, it is crucial to
think about this trade-off.
Best Practices for Designing an Interface
It all starts with getting to know your users, which includes understanding their interests, abilities, tendencies, and habits. Once you have figured out who your user is, keep the following in mind when designing your interface:
What do you think the user would like the system to do?
What role does the system fit in the user's everyday activities or workflow?
How technically savvy is the user, and what other systems does the user already use?
What styles of user interface look and feel do you think the user prefers?
Information Architecture
Development of the process or the system's information flow (for phone tree systems, this will be a choice-tree flowchart, and for a website, this will be a site flow that displays the hierarchy of the pages).
Prototyping
The development of wireframes, either in the form of simple interactive screens or paper prototypes. To focus on the interface, these prototypes are stripped of all look-and-feel components as well as the majority of the content.
Usability Inspection
Allowing an evaluator to examine a user interface. It is typically less expensive to implement as
compared to usability testing, and in the development process, it can be used early. It may be used
early in the development process to determine requirements for the system, which are usually unable
to be tested on the users.
Usability Testing
Prototypes are tested on a real user, often using a method known as think-aloud protocol, in which we
can ask the user to speak about their views during the experience. The testing of user interface design
permits the designer to understand the reception from the viewer's perspective, making it easier to
create effective applications.
Graphical User Interface Design
It is the actual look and feel of the final graphical user interface (GUI). These are the control panels and faces of the design; voice-controlled interfaces involve oral-auditory interaction, while in gesture-based interfaces users engage with 3D design spaces through physical motions.
Software Maintenance
After a new interface is deployed, it may be necessary to perform routine maintenance in order to fix
software bugs, add new functionality or fully update the system. When the decision is taken to update
the interface, the legacy system will go through a new iteration of the design process.
User Interface Design Requirements
The dynamic characteristics of a system are defined in terms of the dialogue requirements contained in the seven principles of Part 10 of the ergonomics standard ISO 9241. This standard provides a system of ergonomic "principles" for the dialogue techniques along with high-level concepts, examples, and implementations. The principles of the dialogue reflect the interface's dynamic aspects and are mostly thought of as the interface's "feel." The seven dialogue principles are suitability for the task, self-descriptiveness, controllability, conformity with user expectations, error tolerance, suitability for individualization, and suitability for learning.
The standard also defines usability in terms of three factors:
The degree to which the overall system's expected objectives of use are met (effectiveness).
The resources that must be spent in order to achieve the desired outcomes (efficiency).
The degree to which the user finds the entire system acceptable (satisfaction).
Usability factors include effectiveness, efficiency, and satisfaction. In order to assess these factors, they must first be split into sub-factors and then into usability measures. Part 12 of the ISO 9241 standard specifies the organization of information such as alignment, arrangement, location, grouping, display of the graphical objects, and the information's coding (colour, shape, visual cues, size, abbreviation) by seven attributes. The seven presentation characteristics are as follows:
Prompts indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback informing the user about their input in a timely, non-intrusive, and perceptible way.
Status information about the application's current state, the system's hardware and software, and the user's activities.
Error management, including error detection, error correction, error messages, and user support for error management.
Online assistance for both system-initiated and user-initiated requests, with detailed information for the current context of usage.
How to Make Great UIs
When creating a stunning GUI, remember that users are people with needs such as comfort and a limited mental capacity. The following guidelines should be followed:
1. Create buttons and other popular components that behave predictably (with responses like pinch-to-zoom) so that users can use them without thinking. Form must follow function.
2. Keep discoverability high. Mark icons clearly and provide well-defined affordances, such as shadows for buttons.
3. The interface should be simple (including only elements that help users achieve their goals) and create an "invisible" feel.
4. In terms of layout, respect the user's eyes and attention. Place emphasis on hierarchy and
readability:
Use proper alignment: usually select edge (over center) alignment.
Draw attention to Key features using:
o Colour, brightness, and contrast are all important factors to consider. Excessive use of colours or buttons should be avoided.
o Font sizes, italics, capitals, bold type/weighting, and letter spacing are all used to style text. Users should be able to deduce meaning simply by scanning.
o Regardless of the context, always have the next steps that the user can naturally
deduce.
o Use proper UI design patterns to assist users in navigating and reducing burdens
such as pre-fill forms. Dark patterns like hard-to-see prefilled opt-in/opt-out
checkboxes and sneaking objects into the user's carts should be avoided.
o Keep user informed about system responses/actions with feedback.
Principles of User Interface Design
1. Clarity is Job #1
The interface's first and most essential task is to provide clarity. To be effective using the interface you designed, people need to recognize what it is, understand why they would use it, and understand what the interface is doing as they interact with it, which helps them anticipate what will occur as they use it.
2. Keep Users in Control
Humans are most at ease when they feel in control of themselves and their surroundings. Thoughtless software robs people of their comfort by dragging them into unexpected encounters, unexpected outcomes, and confusing pathways.
3. Conserve Attention at All Costs
We live in a world of constant interruption. It is difficult to read in peace these days without something attempting to divert our focus. Attention is a valuable commodity. Do not strew distracting content around the sides of your applications; keep in mind why the screen exists in the first place.
4. Interfaces Exist to Enable Interaction
Interaction between humans and our world is allowed by interfaces. They can support, explain, allow,
display associations, illuminate, bring us together, separate us, handle expectations, and provide
access to service. Designing a user interface is not an artistic endeavour. Interfaces are not stand-alone
landmarks.
5. Keep Secondary Actions Secondary
Multiple secondary actions may be added to screens with a single primary action, but they must be kept secondary. Your article page exists not so that individuals can share it on Twitter, but so that people can read and comprehend it.
6. Provide a Natural Next Step
Few interactions are intended to be the last, so thoughtfully design a next step for every interaction a person has with your interface. Anticipate what the next interaction will be and design to accommodate it. Just as in a human conversation, offer an opening for further discussion. Don't leave people hanging just because they did what you wished them to do.
7. Direct Manipulation is Best
There is no need for an interface if we can directly access the physical objects in our universe. We
build interfaces to help us interact with objects because this is not always easy, and objects are
becoming increasingly informational.
8. Highlight, Don't Determine, with Colour
When the light changes, the colour of the physical object changes. In the full light of day, we see very
different tree outlines than we do against a sunset. As in the real world, where colour is a many-shaded thing, colour on its own does not decide anything in an interface. It can be useful for highlighting and directing focus,
but it should not be the only way to distinguish objects.
9. Progressive Disclosure
On each screen, just show what is needed. If people must make a decision, give them sufficient
information in order to make that decision, then go into more details on a subsequent screen. Avoid
the popular trap of over-explaining or showing all at once. Defer decisions to subsequent screens
wherever possible by gradually revealing information as needed. Your experiences would be clearer
as a result of this.
10. Strong Visual Hierarchies Work Best
When the visual elements on a screen are arranged in a clear viewing order, they create a strong visual hierarchy; users consistently see the same objects in the same order. Weak visual hierarchies give little guidance about where to rest one's gaze, and they end up feeling disorganized and confusing.
11. Help People Inline
Help is not needed in an ideal interaction because the interface is usable and learnable. The next best thing is assistance that is inline and contextual, accessible only when and where it is required and concealed at all other times.
12. Build on Other Design Principles
Visual and graphic design, visualization, typography, information architecture, and copywriting are all part of interface design, and a designer may be versed or trained in some of them. Don't get caught up in turf battles or dismiss other disciplines; instead, take what you need from them and keep moving forward.
13. Great Design is Invisible
The interesting thing about good design is that it usually goes unobserved by the people who use it.
One reason for this is that if the design is effective, then the user will be able to concentrate on their
own objectives rather than the interface.
14. Interfaces Exist to be Used
Interface design, like most design disciplines, is effective when people use what you have created.
Design fails if people choose not to use it, just like a beautiful chair that is painful to sit in. As a result, interface design is as much about building a usable experience as it is about designing a useful artifact.
15. A Crucial Moment: The Zero State
The first time a user interacts with an interface is critical, yet designers often ignore it. It is better to plan for the zero state, the state in which nothing has happened yet, to better help our users get up to speed with our designs. The zero state should not be a blank canvas.
Mistakes to Avoid in UI Design
Not implementing a user-centred design: This is an easy aspect to overlook, but it is one of the most critical aspects of UI design. Users' desires, expectations, and problems should all be considered when designing.
Excessive use of dynamic effects: Using a lot of animation effects is not always a sign of
a good design. As a result, limiting the use of decorative animations will help to improve
the user experience.
Preparing so much in advance: Particularly in the early stages of design, we just need to
have the appropriate image of the design in our heads and get to work. However, this
strategy is not always successful.
Not learning more about the target audience: This point, once again, demonstrates what we have just discussed. Rather than designing with your own desires and taste in mind, imagine yourself as the user.
Essential Tools for User Interface Design
1. Sketch
It is a design tool used by numerous UI and UX designers to design and prototype mobile and web applications. Sketch is a vector graphics editor that permits designers to create user interfaces efficiently and quickly. There are various features of Sketch:
o Slicing and Exporting
Sketch gives users a lot of slicing control, allowing them to choose, slice, and export any layer or object they want.
o Symbols
Using this feature, user can build pre-designed elements which can be easily re-used as well
as replicated in any artboard or project. This feature will help designers save time and build a
design library for potential projects.
o Plugins
Maybe a feature you are looking for is not available in the default Sketch app. In that situation, you don't have to worry; there are a number of plugins that can be downloaded externally and added to the Sketch app. The options are limitless!
2. Adobe XD
It is a vector-based tool. We use this tool for designing interfaces and prototyping for mobile
applications as well as the web. Adobe XD is just like Photoshop and Illustrator, but it focuses on user
interface design. Adobe XD has the advantage of including UI kits for Windows, Apple, and Google
Material Design, which helps designers create user interfaces for each device.
Features of Adobe XD
o Voice Trigger
Voice Trigger is an innovative feature introduced by Adobe XD which permits prototypes to
be manipulated via voice commands.
o Responsive Resize
Using this feature, we can automatically adjust and resize objects/elements which are present
on the artboards based on the size of the screen or platform required.
o Collaboration
We can connect Adobe XD to other tools like Slack, allowing the team to collaborate across
platforms like Windows and macOS.
3. Invision Studios
It is a simple vector-based drawing tool with design, animation, and prototyping capabilities. Invision Studios is a relatively new tool, but it has already demonstrated a high level of ambition through its numerous available functionalities and remarkable prototyping capabilities.
Features of the Invision Studios
o Advanced Animations
With the various animations provided by studios, animating your prototype has become even
more exciting. We can expect higher fidelity prototypes with this feature, including auto-layer
linking, timeline editions, and smart-swipe gestures.
o Responsive Design
The responsive design feature saves a lot of time because it eliminates the need for multiple artboards when designing for numerous devices. Invision Studios permits users to create a single artboard that can be adjusted based on the intended device.
o Synced Workflow
Studios enable a synchronised workflow across all projects, from start to finish, in order to
support team collaboration. This involves real-time changes and live concept collaboration, as
well as the ability to provide instant feedback.
4. UXPin
Another amazing tool for user interface design is UXPin, which comes with both design and prototyping capabilities. In contrast to other user interface tools, this tool is considered a better fit for large design teams and projects. UXPin also comes with UI element libraries which give you access to Material Design, iOS libraries, Bootstrap, and a variety of icons.
Features of UXPin
Mobile support
Collaboration
Presentation tools
Drag and Drop
Mockup Creation
Prototype Creation
Interactive Elements
Feedback Collection
Feedback Management
5. Framer X
Framer X was released in 2018. It is one of the newest design tools which is used to design digital
products from mobile applications to websites. The interesting feature of this tool is the capability to prototype with advanced interactions and animations while also integrating code components.
Features of the Framer X
Problem Given:
Suppose you want to create a class for which only a single instance (or object) should be created and
that single object can be used by all other classes.
Solution:
The Singleton design pattern is the best solution to the above problem. Every design pattern has some specification or set of rules for solving a problem; you will see those specifications later, in the types of design patterns.
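A minimal Java sketch of the Singleton pattern for this problem is shown below; the class name AppConfig and its log() method are assumptions used only for illustration.

public class AppConfig {

    // The single shared instance, created eagerly when the class is loaded.
    private static final AppConfig INSTANCE = new AppConfig();

    // Private constructor prevents other classes from creating further instances.
    private AppConfig() { }

    // Global access point used by all other classes.
    public static AppConfig getInstance() {
        return INSTANCE;
    }

    public void log(String message) {
        System.out.println("[AppConfig] " + message);
    }

    public static void main(String[] args) {
        AppConfig a = AppConfig.getInstance();
        AppConfig b = AppConfig.getInstance();
        System.out.println(a == b);   // true: both references point to the same object
        a.log("single instance shared by all callers");
    }
}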
Advantage of design pattern:
1. They are reusable in multiple projects.
2. They provide the solutions that help to define the system architecture.
3. They capture the software engineering experiences.
4. They provide transparency to the design of an application.
5. They are well-proved and testified solutions since they have been built upon the
knowledge and experience of expert software developers.
6. Design patterns don't guarantee an absolute solution to a problem. They provide clarity to the system architecture and the possibility of building a better system.
When should we use the design patterns?
We must use design patterns during the analysis and requirements phase of the SDLC (Software Development Life Cycle). Design patterns ease the analysis and requirements phase by providing information based on prior hands-on experiences.
Categorization of design patterns:
1. Core Java (or JSE) Design Patterns.
2. JEE Design Patterns.
Core Java Design Patterns
In core java, there are mainly three types of design patterns, which are further divided into their sub-
parts:
1.Creational Design Pattern
1. Factory Pattern
2. Abstract Factory Pattern
3. Singleton Pattern
4. Prototype Pattern
5. Builder Pattern.
2. Structural Design Pattern
1. Adapter Pattern
2. Bridge Pattern
3. Composite Pattern
4. Decorator Pattern
5. Facade Pattern
6. Flyweight Pattern
7. Proxy Pattern
3. Behavioral Design Pattern
1. Chain Of Responsibility Pattern
2. Command Pattern
3. Interpreter Pattern
4. Iterator Pattern
5. Mediator Pattern
6. Memento Pattern
7. Observer Pattern
8. State Pattern
9. Strategy Pattern
10. Template Pattern
11. Visitor Pattern
UNIT III SYSTEM DEPENDABILITY AND SECURITY
Dependable Systems
For many computer-based systems, the most important system property is the dependability of the
system. The dependability of a system reflects the user's degree of trust in that system. It reflects the
extent of the user's confidence that it will operate as users expect and that it will not 'fail' in normal
use. System failures may have widespread effects with large numbers of people affected by the
failure. Systems that are not dependable and are unreliable, unsafe or insecure may be rejected by
their users.
Causes of failure:
Hardware failure
Hardware fails because of design and manufacturing errors or because components have reached the
end of their natural life.
Software failure
Software fails due to errors in its specification, design or implementation.
Operational failure
Human operators make mistakes. Operational error is now perhaps the largest single cause of system failures in socio-technical systems.
Dependability properties
Principal properties of dependability:
Availability: The probability that the system will be up and running and able to deliver
useful services to users.
Reliability: The probability that the system will correctly deliver services as expected by
users.
Safety: A judgment of how likely it is that the system will cause damage to people or its
environment.
Security: A judgment of how likely it is that the system can resist accidental or deliberate
intrusions.
Resilience: A judgment of how well a system can maintain the continuity of its critical
services in the presence of disruptive events such as equipment failure and cyberattacks.
Other properties of software dependability:
Repairability reflects the extent to which the system can be repaired in the event of a
failure;
Maintainability reflects the extent to which the system can be adapted to new
requirements;
Survivability reflects the extent to which the system can deliver services whilst under
hostile attack;
Error tolerance reflects the extent to which user input errors can be avoided and
tolerated.
Many dependability attributes depend on one another. Safe system operation depends on the system
being available and operating reliably. A system may be unreliable because its data has been
corrupted by an external attack.
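As a simple quantitative illustration of the availability property (the figures are hypothetical), steady-state availability is commonly estimated from the mean time to failure (MTTF) and the mean time to repair (MTTR):

```latex
\text{Availability} = \frac{\text{MTTF}}{\text{MTTF} + \text{MTTR}}
  = \frac{1000\ \text{h}}{1000\ \text{h} + 2\ \text{h}} \approx 0.998 \;(99.8\%)
```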
How to achieve dependability?
Volume: The volume of a system (the total space occupied) varies depending on how the component assemblies are arranged and connected.
Security: The security of the system (its ability to resist attack) is a complex property that cannot be easily measured. Attacks may be devised that were not anticipated by the system designers and so may defeat built-in safeguards.
Repairability: This property reflects how easy it is to fix a problem with the system once it has been discovered. It depends on being able to diagnose the problem, access the components that are faulty, and modify or replace these components.
Usability: This property reflects how easy it is to use the system. It depends on the technical system components, its operators, and its operating environment.
Regulation and compliance
Many critical systems are regulated systems, which means that their use must be approved by an
external regulator before the systems go into service. Examples: nuclear systems, air traffic control
systems, medical devices. A safety and dependability case has to be approved by the regulator.
Redundancy and diversity
Redundancy: Keep more than a single version of critical components so that if one fails then a
backup is available.
Diversity: Provide the same functionality in different ways in different components so that they will
not fail in the same way.
Redundant and diverse components should be independent so that they will not suffer from 'common-
mode' failures.
Process activities, such as validation, should not depend on a single approach, such as testing, to validate the system. Redundant and diverse process activities are especially important for verification and validation. Multiple, different process activities that complement each other and allow for cross-checking help to avoid process errors, which may lead to errors in the software.
Dependable processes
To ensure a minimal number of software faults, it is important to have a well-defined, repeatable
software process. A well-defined, repeatable process is one that does not depend entirely on individual skills; rather, it can be enacted by different people. Regulators use information about the process to
check if good software engineering practice has been used.
Dependable process characteristics:
Explicitly defined
A process that has a defined process model that is used to drive the software production process. Data
must be collected during the process that proves that the development team has followed the process
as defined in the process model.
Repeatable
A process that does not rely on individual interpretation and judgment. The process can be repeated
across projects and with different team members, irrespective of who is involved in the development.
Dependable process activities
Requirements reviews to check that the requirements are, as far as possible, complete
and consistent.
Requirements management to ensure that changes to the requirements are controlled
and that the impact of proposed requirements changes is understood.
Formal specification, where a mathematical model of the software is created and
analyzed.
System modeling, where the software design is explicitly documented as a set of
graphical models, and the links between the requirements and these models are
documented.
Design and program inspections, where the different descriptions of the system are
inspected and checked by different people.
Static analysis, where automated checks are carried out on the source code of the
program.
Test planning and management, where a comprehensive set of system tests is designed.
Dependability Properties
Correctness
Reliability
Robustness
Safety
Correctness
A system is correct if its behaviour conforms to its functional specification.
Reliability
Reliability is a statistical approximation to correctness: the probability that a system does not deviate from the expected behaviour, i.e. the likelihood that it conforms to its specifications. Unlike correctness, it is defined against an operational profile of a software system.
An operational profile gives the probability that a given number of users (workload intensity) would access a system/functionality/service/operation concurrently. It is a quantitative characterization of how a system will be used, and it shows how to increase productivity and reliability and speed up development by allocating development resources to functions on the basis of use.
Major Measures of Reliability
Availability: the portion of time in which the software operates with no down time
Time Between Failures: the time elapsing between two consecutive failures
Cumulative number of failures: the total number of failures that have occurred up to a given time t
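As a small illustration of these measures, the sketch below (hypothetical data and names) derives the cumulative number of failures, the mean time between failures and the availability from a simple log of failure times and repair durations:

```java
import java.util.List;

// Sketch: derive basic reliability measures from a hypothetical failure log.
public class ReliabilityMeasures {
    public static void main(String[] args) {
        double observationHours = 1000.0;                        // total observation window
        List<Double> failureTimes = List.of(120.0, 450.0, 800.0); // hours at which failures occurred
        double totalRepairHours = 6.0;                           // total downtime spent on repairs

        int cumulativeFailures = failureTimes.size();            // total failures at t = 1000 h
        double uptime = observationHours - totalRepairHours;

        double mtbf = uptime / cumulativeFailures;               // mean time between failures
        double availability = uptime / observationHours;         // portion of time with no downtime

        System.out.printf("Failures: %d, MTBF: %.1f h, Availability: %.3f%n",
                cumulativeFailures, mtbf, availability);
    }
}
```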
Robustness
Robustness is the ability of software to behave acceptably in unusual circumstances that are not covered by its specification, for example an unforeseen load of users accessing a web site.
Robust software applies a workaround: it maintains the same throughput by holding the most recently arrived users until the load decreases, so that performance for already registered users is not reduced.
Action to be taken to increase robustness: augment the software specification with appropriate responses to the given unusual circumstances (i.e. enrich the operational profile with unlikely situations).
Safety
Security
Socio-technical theory has at its core the idea that the design and performance of any organisational
system can only be understood and improved if both ‘social’ and ‘technical’ aspects are brought
together and treated as interdependent parts of a complex system.
Organisational change programmes often fail because they are too focused on one aspect of the
system, commonly technology, and fail to analyse and understand the complex interdependencies that
exist. This is directly analogous to the design of a complex engineering product such as a gas turbine
engine. Just as any change to this complex engineering system has to address the knock-on effects
through the rest of the engine, so too does any change within an organisational system.
There will be few, if any, individuals who understand all the interdependent aspects of how complex
systems work. This is true of complex engineering products and it is equally true of organisational
systems. The implication is that understanding and improvement requires the input of all key
stakeholders, including those who work within different parts of the system. ‘User participation’
thereby is a pre-requisite for systemic understanding and change and, in this perspective, the term
‘user’ is broadly defined to include all key stakeholders.
The potential benefits of such an approach include:
Strong engagement
Reliable and valid data on which to build understanding
A better understanding and analysis of how the system works now (the ‘as is’)
A more comprehensive understanding of how the system may be improved (the ‘to
be’)
Greater chance of successful improvements
The socio-technical perspective originates from pioneering work at the Tavistock Institute and has
been continued on a worldwide basis by key figures such as Harold Leavitt, Albert Cherns, Ken
Eason, Enid Mumford and many others.
Our use of the hexagon draws heavily on the work of Harold J. Leavitt, who viewed organisations as comprising four key interacting variables, namely task, structure, technology and people (actors).
We have used this systems approach in a wide range of domains including overlapping projects
focused on:
Computer systems
New buildings
New ways of working
New services
Behaviour change
Safety and accidents
Crowd behaviours
Organisational resilience
Sustainability (energy, water and waste)
Green behaviours at work and in the home
Engineering design
Knowledge management
Tele-health
Social networks
Organisational modelling and simulation
Supply chain innovation
Risk analysis
Performance and productivity
Process compliance
A systems perspective is an intellectually robust and useful way of looking at organisations. It speaks
well to our clients and provides a coherent vehicle for collaboration with other disciplines, most
obviously with our engineering colleagues. Our experience is that most of the difficult problems and
exciting opportunities we face in the world lie at the intersections between human behaviour and
engineering innovation. Systems theory provides a useful tool to help us understand and address these
challenges.
Redundancy and Diversity
Redundancy
In engineering, redundancy is the intentional duplication of critical components or functions of a
system with the goal of increasing reliability of the system, usually in the form of a backup or fail-
safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-
threaded computer processing.
In many safety-critical systems, such as fly-by-wire and hydraulic systems in aircraft, some parts of
the control system may be triplicated, which is formally termed triple modular redundancy (TMR).
An error in one component may then be out-voted by the other two. In a triply redundant system, the
system has three sub components, all three of which must fail before the system fails. Since each one
rarely fails, and the sub components are expected to fail independently, the probability of all three
failing is calculated to be extraordinarily small; it is often outweighed by other risk factors, such
as human error.
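Under the simple model described above, where the system fails only when all three independent channels have failed, a hypothetical per-channel failure probability p gives:

```latex
P(\text{system fails}) = p^{3},
\qquad p = 10^{-3} \;\Rightarrow\; P(\text{system fails}) = 10^{-9}
```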
Such redundancy is often combined with diversity by using different:
processors,
operating systems,
software,
sensors,
types of actuators (electric, hydraulic, pneumatic, manual mechanical, etc.)
communications protocols,
communications hardware,
communications networks,
communications paths
Geographic redundancy
Geographic redundancy addresses the vulnerabilities of co-located redundant devices by geographically separating the backup devices. It reduces the likelihood that events such as power outages, floods, HVAC failures, lightning strikes, tornadoes, building fires, wildfires, and mass shootings would disable the whole system.
Geographic redundancy locations should be separated by enough distance that a single local or regional event cannot affect all of them.
Dependable Processes
Dependability is a measure of a system's availability, reliability, maintainability and, in some cases, other characteristics such as durability, safety and security. In real-time computing, dependability is the ability to provide services that can be trusted within a time period. The service guarantees must hold even when the system is subject to attacks or natural failures.
The International Electrotechnical Commission (IEC), via its Technical Committee TC 56, develops and maintains international standards that provide systematic methods and tools for dependability assessment and management of equipment, services, and systems throughout their life cycles.
Dependability can be broken down into three elements:
Fault: A fault (which is usually referred to as a bug for historic reasons) is a defect in
a system. The presence of a fault in a system may or may not lead to a failure. For
instance, although a system may contain a fault, its input and state conditions may
never cause this fault to be executed so that an error occurs; and thus that particular
fault never exhibits as a failure.
Error: An error is a discrepancy between the intended behavior of a system and its
actual behavior inside the system boundary. Errors occur at runtime when some part
of the system enters an unexpected state due to the activation of a fault. Since errors
are generated from invalid states they are hard to observe without special
mechanisms, such as debuggers or debug output to logs.
Failure: A failure is an instance in time when a system displays behavior that is
contrary to its specification. An error may not necessarily cause a failure, for instance
an exception may be thrown by a system but this may be caught and handled using
fault tolerance techniques so the overall operation of the system will conform to the
specification.
It is important to note that Failures are recorded at the system boundary. They are basically Errors that
have propagated to the system boundary and have become observable. Faults, Errors and Failures
operate according to a mechanism. This mechanism is sometimes known as a Fault-Error-Failure
chain. Once a fault is activated an error is created. An error may act in the same way as a fault in that
it can create further error conditions, therefore an error may propagate multiple times within a system
boundary without causing an observable failure. If an error propagates outside the system boundary a
failure is said to occur. A failure is basically the point at which it can be said that a service is failing to
meet its specification.
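The sketch below (entirely hypothetical code) illustrates the chain: a fault (a divide-by-zero bug) lies dormant until a particular input activates it, producing an error inside the system; only if that error reaches the system boundary unhandled does it become an observable failure.

```java
// Hypothetical illustration of the fault -> error -> failure chain.
public class FaultErrorFailureDemo {

    // Fault: the divisor is not checked for zero (a dormant defect, i.e. a bug).
    static int itemsPerBox(int items, int boxes) {
        return items / boxes;
    }

    public static void main(String[] args) {
        // The fault is never activated for boxes != 0, so no error and no failure occur.
        System.out.println(itemsPerBox(10, 2));

        try {
            // This input activates the fault: an error (invalid state) arises inside the system.
            System.out.println(itemsPerBox(10, 0));
        } catch (ArithmeticException e) {
            // The error is handled before it reaches the user, so it does not become a failure.
            System.out.println("Recovered from internal error: " + e.getMessage());
        }
    }
}
```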
Means
Since the mechanism of the fault-error-failure chain is understood, it is possible to construct means to break these chains and thereby increase the dependability of a system. Four means have been identified so far:
Prevention
Removal
Forecasting
Tolerance
Fault Prevention deals with preventing faults being introduced into a system. This can be
accomplished by use of development methodologies and good implementation techniques.
Fault Removal can be sub-divided into two sub-categories: removal during development and removal during use. Removal during development requires verification so that faults can be detected and removed before a system is put into production. Once systems have been put into production, a system is needed to record failures and remove them via a maintenance cycle.
Fault Forecasting predicts likely faults so that they can be removed or their effects can be
circumvented.
Fault Tolerance deals with putting mechanisms in place that will allow a system to still deliver the
required service in the presence of faults, although that service may be at a degraded level.
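A minimal sketch of fault tolerance by graceful degradation (all names below are hypothetical): if the primary service fails, the system still delivers a result, though at a degraded level.

```java
// Hypothetical sketch of fault tolerance: fall back to a degraded service on failure.
public class PriceService {

    // Primary path: stands in for a live pricing backend that may fail.
    static double livePrice(String product) {
        throw new IllegalStateException("pricing backend unreachable");
    }

    // Degraded path: a stale cached price is better than no answer at all.
    static double cachedPrice(String product) {
        return 9.99;
    }

    static double price(String product) {
        try {
            return livePrice(product);
        } catch (RuntimeException e) {
            // Tolerate the fault: deliver the required service at a degraded level.
            return cachedPrice(product);
        }
    }

    public static void main(String[] args) {
        System.out.println("Price: " + price("widget")); // prints the cached fallback value
    }
}
```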
Persistence
Based on how faults appear or persist, they are classified as:
Transient: They appear without apparent cause and disappear again without apparent cause
Intermittent: They appear multiple times, possibly without a discernible pattern, and
disappear on their own
Permanent: Once they appear, they do not get resolved on their own
Dependability of information systems and survivability
Some works on dependability use structured information systems, e.g. with SOA, to introduce the
attribute survivability, thus taking into account the degraded services that an Information System
sustains or resumes after a non-maskable failure.
The flexibility of current frameworks encourages system architects to enable reconfiguration mechanisms that refocus the available, safe resources to support the most critical services, rather than over-provisioning to build a failure-proof system.
If the formal specification is in operational semantics, the observed behavior of the concrete
system can be compared with the behavior of the specification (which itself should be
executable or simulatable). Additionally, the operational commands of the specification may
be amenable to direct translation into executable code.
If the formal specification is in axiomatic semantics, the preconditions and postconditions of
the specification may become assertions in the executable code.
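For example (a hypothetical sketch, with names of our own choosing), a precondition and postcondition taken from an axiomatic specification can be carried into Java code as assertions:

```java
// Hypothetical sketch: specification pre/postconditions expressed as assertions.
// Run with "java -ea SqrtSpec" so that assertions are enabled.
public class SqrtSpec {

    // Specification: requires x >= 0; ensures |result*result - x| is small.
    static double sqrt(double x) {
        assert x >= 0 : "precondition violated: x must be non-negative";
        double result = Math.sqrt(x);
        assert Math.abs(result * result - x) < 1e-9 : "postcondition violated";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sqrt(2.0));
    }
}
```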
Verification
Formal verification is the use of software tools to prove properties of a formal specification, or to prove that a formal model of a system implementation satisfies its specification. Once a formal specification has been developed, it may be used as the basis for proving properties of the specification and, by inference, properties of the system implementation.
Sign-off verification
Sign-off verification is the use of a formal verification tool that is highly trusted. Such a tool can
replace traditional verification methods (the tool may even be certified).
Human-directed proof
Sometimes, the motivation for proving the correctness of a system is not the obvious need for
reassurance of the correctness of the system, but a desire to understand the system better.
Consequently, some proofs of correctness are produced in the style of mathematical proof:
handwritten (or typeset) using natural language, using a level of informality common to such proofs.
A "good" proof is one that is readable and understandable by other human readers.
Critics of such approaches point out that the ambiguity inherent in natural language allows errors to
be undetected in such proofs; often, subtle errors can be present in the low-level details typically
overlooked by such proofs.
Automated proof
In contrast, there is increasing interest in producing proofs of correctness of such systems by
automated means. Automated techniques fall into three general categories:
Automated theorem proving, in which a system attempts to produce a formal proof from
scratch, given a description of the system, a set of logical axioms, and a set of inference rules.
Model checking, in which a system verifies certain properties by means of an exhaustive
search of all possible states that a system could enter during its execution.
Abstract interpretation, in which a system verifies an over-approximation of a behavioural
property of the program, using a fixpoint computation over a (possibly complete) lattice
representing it.
Some automated theorem provers require guidance as to which properties are "interesting" enough to
pursue, while others work without human intervention. Model checkers can quickly get bogged down
in checking millions of uninteresting states if not given a sufficiently abstract model.
Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give
no explanation of that truth. There is also the problem of "verifying the verifier"; if the program which
aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced
results. Some modern model checking tools produce a "proof log" detailing each step in their proof,
making it possible to perform, given suitable tools, independent verification.
The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. no
false negatives are returned. Moreover, it is efficiently scalable, by tuning the abstract domain
representing the property to be analyzed, and by applying widening operators to get fast convergence.
Applications
Formal methods are applied in different areas of hardware and software, including routers, Ethernet
switches, routing protocols, security applications, and operating system microkernels such as seL4.
There are several examples in which they have been used to verify the functionality of the hardware and software used in data centres. IBM used ACL2, a theorem prover, in the AMD x86 processor development process. Intel uses such methods to verify its hardware and firmware (permanent software programmed into read-only memory). The Dansk Datamatik Center used formal methods in the 1980s to develop a compiler system for the Ada programming language that went on to become a long-lived commercial product.
In software development
In software development, formal methods are mathematical approaches to solving software (and
hardware) problems at the requirements, specification, and design levels. Formal methods are most
likely to be applied to safety-critical or security-critical software and systems, such as avionics
software. Software safety assurance standards, such as DO-178C, allow the use of formal methods through supplementation, and the Common Criteria mandates formal methods at the highest levels of categorization.
Another approach to formal methods in software development is to write a specification in some form
of logic—usually a variation of first-order logic (FOL)—and then to directly execute the logic as
though it were a program. The OWL language, based on Description Logic (DL), is an example.
There is also work on mapping some version of English (or another natural language) automatically to
and from logic, as well as executing the logic directly. Examples are Attempto Controlled English,
and Internet Business Logic, which do not seek to control the vocabulary or syntax.
Reliability Engineering
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of
equipment to function without failure. Reliability describes the ability of a system or component to
function under stated conditions for a specified period of time. Reliability is closely related
to availability, which is typically described as the ability of a component or system to function at a
specified moment or interval of time.
Reliability engineering deals with the prediction, prevention and management of high levels of lifetime engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not achieved by mathematics and statistics alone: "Nearly all teaching and literature on the subject emphasize these aspects, and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement."
Objective
1) To apply engineering knowledge and specialist techniques to prevent or to reduce the
likelihood or frequency of failures.
2) To identify and correct the causes of failures that do occur despite the efforts to prevent
them.
3) To determine ways of coping with failures that do occur, if their causes have not been
corrected.
4) To apply methods for estimating the likely reliability of new designs, and for analysing
reliability data.
The reason for emphasizing prevention first is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are
the ability to understand and anticipate the possible causes of failures, and knowledge of how to
prevent them. It is also necessary to have knowledge of the methods that can be used for analysing
designs and data.
Scope and techniques
Reliability engineering for complex systems requires a different, more elaborate systems approach
than for non-complex systems. Reliability engineering may in that case involve:
System availability and mission readiness analysis and related reliability and maintenance
requirement allocation
Functional system failure analysis and derived requirements specification
Inherent (system) design reliability analysis and derived requirements specification for both
hardware and software design
System diagnostics design
Fault tolerant systems (e.g. by redundancy)
Predictive and preventive maintenance (e.g. reliability-centered maintenance)
Human factors / human interaction / human errors
Manufacturing- and assembly-induced failures (effect on the detected "0-hour quality" and
reliability)
Maintenance-induced failures
Transport-induced failures
Storage-induced failures
Use (load) studies, component stress analysis, and derived requirements specification
Software (systematic) failures
Failure / reliability testing (and derived requirements)
Field failure monitoring and corrective actions
Spare parts stocking (availability control)
Technical documentation, caution and warning analysis
Data and information acquisition/organisation (creation of a general reliability development
hazard log and FRACAS system)
Chaos engineering
Effective reliability engineering requires understanding of the basics of failure mechanisms for which
experience, broad engineering skills and good knowledge from many different special fields of
engineering are required, for example:
Tribology
Stress (mechanics)
Fracture mechanics / fatigue
Thermal engineering
Fluid mechanics / shock-loading engineering
Electrical engineering
Chemical engineering (e.g. corrosion)
Material science
Definitions
Reliability may be defined in the following ways:
The idea that an item is fit for a purpose with respect to time
The capacity of a designed, produced, or maintained item to perform as required over time
The capacity of a population of designed, produced or maintained items to perform as
required over time
The resistance to failure of an item over time
The probability of an item to perform a required function under stated conditions for a
specified period of time
The durability of an object
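The probabilistic definition above is often made concrete using the exponential model for a constant failure rate λ (the numbers here are purely illustrative):

```latex
R(t) = e^{-\lambda t},
\qquad \lambda = 0.001\ \text{failures/hour}
\;\Rightarrow\; R(1000\ \text{h}) = e^{-1} \approx 0.368
```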
Basics of a reliability assessment
Many engineering techniques are used in reliability risk assessments, such as reliability block
diagrams, hazard analysis, failure mode and effects analysis (FMEA), fault tree
analysis (FTA), Reliability Centered Maintenance, (probabilistic) load and material stress and wear
calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect
analysis, reliability testing, etc.
Consistent with the creation of safety cases, for example per ARP4761, the goal of reliability
assessments is to provide a robust set of qualitative and quantitative evidence that use of a component
or system will not be associated with unacceptable risk. The basic steps to take are to:
Thoroughly identify relevant unreliability "hazards", e.g. potential conditions, events, human
errors, failure modes, interactions, failure mechanisms and root causes, by specific analysis or
tests.
Assess the associated system risk, by specific analysis or testing.
Propose mitigation, e.g. requirements, design changes, detection logic, maintenance, training,
by which the risks may be lowered and controlled for at an acceptable level.
Determine the best mitigation and get agreement on final, acceptable risk levels, possibly
based on cost/benefit analysis.
Risk here is the combination of probability and severity of the failure incident (scenario) occurring.
The severity can be looked at from a system safety or a system availability point of view. Reliability
for safety can be thought of as a very different focus from reliability for system availability.
Reliability and availability program plan
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items
that must be completed that will ensure one has reliable products and processes. A reliability program
is a complex learning and knowledge-based system unique to one's products and processes. It is
supported by leadership, built on the skills that one develops within a team, integrated into business
processes and executed by following proven standard work practices.
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools,
analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements
for reliability assessment. For large-scale complex systems, the reliability program plan should be a
separate document. Resource determination for manpower and budgets for testing and other tasks is
critical for a successful program. In general, the amount of work required for an effective program for
complex systems is large.
Reliability requirements
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability
and maintainability requirements allocated from the overall availability needs and, more importantly,
derived from proper design failure analysis or preliminary prototype test results. Clear requirements (that can actually be designed to) should constrain the designers from designing particular unreliable items / constructions / interfaces / systems. Setting only availability, reliability, testability, or maintainability targets (e.g., maximum failure rates) is not sufficient; this is a common misunderstanding about reliability requirements engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation.
Reliability culture / human errors / human factors
In practice, most failures can be traced back to some type of human error.
Commonly used life-stress models for accelerated life testing include:
Arrhenius model
Eyring model
Inverse power law model
Temperature-humidity model
Temperature non-thermal model
Software reliability
Software reliability is a special aspect of reliability engineering. System reliability, by definition,
includes all parts of the system, including hardware, software, supporting infrastructure (including
critical external interfaces), operators and procedures.
Structural reliability
Structural reliability, or the reliability of structures, is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures, including concrete and steel structures. In structural reliability studies, both loads and resistances are modeled as probabilistic variables, and the probability of failure of a structure is calculated using this approach.
Basic reliability and mission reliability
A 2oo3 (two-out-of-three) fault-tolerant system increases both mission reliability and safety. However, the "basic" reliability of the system will in this case still be lower than that of a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not
result in system failure, but do result in additional cost due to: maintenance repair actions; logistics;
spare parts etc.
Detectability and common cause failures
When using fault tolerant (redundant) systems or systems that are equipped with protection functions,
detectability of failures and avoidance of common cause failures becomes paramount for safe
functioning and/or mission reliability.
Reliability versus quality (Six Sigma)
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the
failure intensity over the whole life of a product or engineering system from commissioning to
decommissioning. Six Sigma has its roots in statistical control in quality of manufacturing. Reliability
engineering is a specialty part of systems engineering.
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of
excellence. In industry, a more precise definition of quality as "conformance to requirements or
specifications at the start of use" is used.
Reliability operational assessment
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and
correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters
identified during the fault tree analysis design stage. Data collection is highly dependent on the nature
of the system. Most large organizations have quality control groups that collect failure data on
vehicles, equipment and machinery.
Reliability organizations
Systems of any significant complexity are developed by organizations of people, such as a
commercial company or a government agency. The reliability engineering organization must be
consistent with the company's organizational structure. For small, non-critical systems, reliability
engineering may be informal.
There are several common types of reliability organizations. The project manager or chief engineer
may employ one or more reliability engineers directly. In larger organizations, there is usually a
product assurance or specialty engineering organization, which may include
reliability, maintainability, quality, safety, human factors, logistics, etc. In such case, the reliability
engineer reports to the product assurance manager or specialty engineering manager.
When analyzing each failure mode, consider its impact on:
Equipment costs
Production
Safety
The environment
As you analyze each failure mode, you'll be able to determine which ones are most important to prevent.
3. Prioritize preventive maintenance tasks
Once you know the failure modes you need to prevent most, it’s time to prioritize your preventive
maintenance tasks. This step is fairly straightforward, but it does require knowing what tasks are
needed to prevent the most severe failure modes.
You may need to perform a bit of root cause analysis here. In order to avoid wasted PMs, you’ll want
to make sure the tasks you plan actually treat the equipment failures you want to prevent.
4. Optimize MRO inventory management
Your MRO inventory should be stocked with appropriate quantities of the right items. While it is
important to keep inventory costs down—meaning you shouldn’t keep too many items in stock—you
do need to make sure you have enough of each item in stock.
That means analyzing your work order history on each asset and determining what spare parts and
tools are used when, how many parts are needed, and how long it takes to replenish your stock of
those parts.
5. Train your team in best practices
Many equipment failures result from human error, so it’s important to make sure your operators and
maintenance technicians are well versed in best practices. Alongside having operating procedures in
place that maximize equipment availability, train your personnel on following those procedures with
precision. In addition, consider adding checklists to work orders and other documents used by your
personnel.
6. Focus on continuous improvement
As you work on improving reliability in your facility, don’t stop after each step. It’s a continuous
process, and you’ll need to keep working to improve upon each new procedure, practice, and task you
implement.
Be constantly on the lookout for ways to streamline your maintenance and production processes,
improve quality, and eliminate defects. Perform regular audits on your equipment and processes.
Through it all, keep careful records. Doing so will give you the baseline knowledge you need to keep
moving forward with continuous improvement.
Tip: A CMMS can help you track the condition of your equipment, log work orders, and generate
reports that will help you in the process of continuous improvement.
Relationship between availability and reliability
Generally, availability and reliability go hand in hand, and an increase in reliability usually translates
to an increase in availability. However, it is important to remember that both metrics can produce
different results. Sometimes, you might have a highly available machine that is not reliable or vice
versa.
Take, for example, a general-purpose motor that is operating close to its maximum capacity. The motor can run for several hours a day, implying high availability. However, it needs to stop every half an hour to resolve operational problems, so its reliability (the time between failures) is poor.
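To attach illustrative (hypothetical) numbers to this example: suppose the motor runs a 10-hour shift (600 minutes) but is stopped 20 times for roughly one minute each:

```latex
A = \frac{600 - 20}{600} \approx 96.7\%,
\qquad
\text{MTBF} \approx \frac{580\ \text{min}}{20\ \text{failures}} = 29\ \text{min}
```

Availability is high, yet the motor fails on average every half hour, so its reliability is poor.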
Conclusion on availability and reliability
As you focus on improving both availability and reliability in your facility, you’ll help improve the
overall quality and effectiveness of your processes. You’ll see fewer defects, more productivity, and
greater profitability in your facility.
Reliability Requirements
One of the most essential aspects of a reliability program is defining the reliability goals that a product
needs to achieve. This article will explain the proper ways to describe a reliability goal and also
highlight some of the ways reliability requirements are commonly defined improperly.
Designs are usually based on specifications. Reliability requirements are typically part of a technical
specifications document. They can be requirements that a company sets for its product and its own
engineers or what it reports as its reliability to its customers. They can also be requirements set for
suppliers or subcontractors. However, reliability can be difficult to specify.
What are the essential elements of a reliability requirement?
There are many facets to a reliability requirement statement.
Measurable:
Reliability metrics are best stated as probability statements that are measurable by test or analysis
during the product development time frame.
The usage and environmental conditions under which the stated reliability applies should also be specified, for example:
Using constant values. For example: usage temperature is 25 °C. This could be an average value or, preferably, a high-stress value that accommodates most customers and applications.
Using limits. For example: usage temperature is between -15 °C and 40 °C.
Using distributions. For example: usage temperature follows a normal distribution with a mean of 35 °C and a standard deviation of 5 °C.
Using time-dependent profiles. For example: usage temperature starts at 70 °C at t = 0, changes linearly to 35 °C within 3 hours, remains at that level for 10 hours, then increases exponentially to 50 °C within 2 hours and remains at that level for 20 hours. A mathematical model (function) can be used to describe such profiles.
Time:
Time could mean hours, years, cycles, mileage, shots, actuations, trips, etc. It is whatever is associated
with the aging of the product. For example, saying that the reliability should be 90% would be
incomplete without specifying the time window. The correct way would be to say that, for example,
the reliability should be 90% at 10,000 cycles.
Failure definition:
The requirements should include a clear definition of product failure. The failure can be a complete
failure or degradation of the product. For example: part completely breaks, part cracks, crack length
exceeds 10 mm, part starts shaking, etc. The definition is incorporated into tests and should be used
consistently throughout the analysis.
Confidence:
A reliability requirement statement should be specified with a confidence level, which allows for
consideration of the variability of data being compared to the specification.
The 50th percentile of failures can be computed using the B50 metric.
The time of interest is 10,000 miles. This could be design life, warranty period or whatever
operation/usage time is of interest to you and your customers.
The probability that the product will not fail before 10,000 miles is 90%; equivalently, 10% of units are expected to fail by 10,000 miles.
Although the above two examples (4 and 5) are good metrics, they lack a specification of how much
confidence is to be had in estimating whether the product meets these reliability goals.
Requirement Example 6: 90% Reliability at 10,000 miles with 50% confidence.
Same as above (Example 5) with the following addition:
The lower reliability estimate obtained from your tested sample (or data collected from the
field) is at the 50% confidence level.
This corresponds to the regression line that goes through the data in a regression plot obtained when a
distribution (such as a Weibull) model is fitted to times-to-failure. The line is at 50% confidence. In
other words, this means that there is a 50% chance that your estimated value of reliability is greater
than the true reliability value and there is a 50% chance that it is lower. Using a lower 50%
confidence on reliability is equivalent to not mentioning the confidence level at all!
Let us use the following example to illustrate this reliability requirement. Two candidate designs are modeled with a Weibull distribution, using rank regression on X as the parameter estimation method, and compared on a probability plot.
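As a sketch of how such a requirement can be checked once Weibull parameters have been estimated (the parameter values below are hypothetical, not taken from any fitted data set):

```java
// Hypothetical sketch: check a "90% reliability at 10,000 miles" requirement
// against an estimated two-parameter Weibull model R(t) = exp(-(t/eta)^beta).
// Note: the confidence bound on R would be checked against the fitted model's
// confidence limits; that step is omitted here.
public class WeibullCheck {
    public static void main(String[] args) {
        double beta = 1.5;      // estimated shape parameter (hypothetical)
        double eta = 60000.0;   // estimated scale parameter in miles (hypothetical)
        double t = 10000.0;     // time of interest: 10,000 miles

        double reliability = Math.exp(-Math.pow(t / eta, beta));
        System.out.printf("R(%.0f miles) = %.3f%n", t, reliability);
        System.out.println(reliability >= 0.90 ? "Requirement met" : "Requirement not met");
    }
}
```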
Requirement Example 7: 90% Reliability for 10,000 miles with 90% confidence.
Same as above (Example 6) with the exception that here, more confidence is required in the reliability
estimate. This statement means that the 90% lower confidence estimate on reliability at 10,000 miles
should be 90%.
Requirement Example 8: 90% Reliability for 10,000 miles with 90% confidence for a 98th percentile
customer.
Same as above (Example 7) with the following addition:
The 98th percentile is a point on the usage stress curve. This describes the stress severity level
for which the reliability is estimated. It means that 98% of the customers who use the product,
or 98% of the range of environmental conditions applied to the product, will experience the
90% reliability.
To be able to estimate reliability at the 98th percentile of the stress level, units would have to be tested
at that stress level or, using accelerated testing methods, the units could be tested at different stress
levels and the reliability could be projected to the 98th percentile of the stress.
Conclusion
As demonstrated in this article, it is important to understand what a reliability requirement actually
means in terms of product performance and to select the metric that will accurately reflect the
expectations of the designers and end-users. The MTTF, MTBF and failure rate metrics are
commonly misunderstood and very often improperly applied.
Fault-tolerant Architectures
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of, or one or more faults within, some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, in contrast to a naively designed system, in which even a small failure can cause total breakdown.
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level,
rather than failing completely, when some part of the system fails. The term is most commonly used
to describe computer systems designed to continue more or less fully operational with, perhaps, a
reduction in throughput or an increase in response time in the event of some partial failure.
Examples
"M2 Mobile Web", the original mobile web front end of Twitter, later served as fallback legacy
version to clients without JavaScript support and/or incompatible browsers until December 2020.
Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new
parts while the system is still operational (in computing known as hot swapping). Such a system
implemented with a single backup is known as single point tolerant and represents the vast majority of
fault-tolerant systems. In such systems the mean time between failures should be long enough for the
operators to have sufficient time to fix the broken devices (mean time to repair) before the backup
also fails.
Fault tolerance is notably successful in computer applications. Tandem Computers built their entire
business on such machines, which used single-point tolerance to create their NonStop systems
with uptimes measured in years.
Terminology
Graceful degradation by design can be illustrated with an image that uses transparency: in a viewer that recognises transparency, the composite image is displayed correctly; in a viewer with no transparency support, the transparency mask is discarded and only the overlay remains. An image designed to degrade gracefully is still meaningful without its transparency information.
A highly fault-tolerant system might continue at the same level of performance even though one or
more components have failed. For example, a building with a backup electrical generator will provide
the same voltage to wall outlets even if the grid power fails.
Single fault condition
A single fault condition is a situation where one means for protection against a hazard is defective. If
a single fault condition results unavoidably in another single fault condition, the two failures are
considered as one single fault condition. A source offers the following example:
A single-fault condition is a condition when a single means for protection against hazard in equipment
is defective or a single external abnormal condition is present, e.g. short circuit between the live parts
and the applied part.
Criteria
Providing fault-tolerant design for every component is normally not an option. Associated redundancy
brings a number of penalties: increase in weight, size, power consumption, cost, as well as time to
design, verify, and test. Therefore, a number of choices have to be examined to determine which
components should be fault tolerant.
How critical is the component? In a car, the radio is not critical, so this component has less
need for fault tolerance.
How likely is the component to fail? Some components, like the drive shaft in a car, are not
likely to fail, so no fault tolerance is needed.
How expensive is it to make the component fault tolerant? Requiring a redundant car
engine, for example, would likely be too expensive both economically and in terms of weight
and space, to be considered.
An example of a component that passes all the tests is a car's occupant restraint system. Although we do not normally think about it, the primary occupant restraint system is gravity. If the vehicle rolls over or undergoes severe g-forces, then this primary method of occupant restraint may fail. Restraining the
occupants during such an accident is absolutely critical to safety, so we pass the first test. Accidents
causing occupant ejection were quite common before seat belts, so we pass the second test. The cost
of a redundant restraint method like seat belts is quite low, both economically and in terms of weight
and space, so we pass the third test.
Requirements
The basic characteristics of fault tolerance require:
1) No single point of failure – If a system experiences a failure, it must continue to operate
without interruption during the repair process.
2) Fault isolation to the failing component – When a failure occurs, the system must be able
to isolate the failure to the offending component. This requires the addition of dedicated
failure detection mechanisms that exist only for the purpose of fault isolation. Recovery
from a fault condition requires classifying the fault or failing component. The National
Institute of Standards and Technology (NIST) categorizes faults based on locality, cause,
duration, and effect.
3) Fault containment to prevent propagation of the failure – Some failure mechanisms can
cause a system to fail by propagating the failure to the rest of the system. An example of
this kind of failure is the "rogue transmitter" that can swamp legitimate communication in
a system and cause overall system failure.
4) Availability of reversion modes
In addition, fault-tolerant systems are characterized in terms of both planned service outages and
unplanned service outages. These are usually measured at the application level and not just at a
hardware level. The figure of merit is called availability and is expressed as a percentage.
Fault tolerance techniques
Research into the kinds of tolerances needed for critical systems involves a large amount of
interdisciplinary work. The more complex the system, the more carefully all possible interactions
have to be considered and prepared for. Considering the importance of high-value systems in
transport, public utilities and the military, the field of topics that touch on research is very wide: it can
include such obvious subjects as software modeling and reliability, or hardware design, to arcane
elements such as stochastic models, graph theory, formal or exclusionary logic, parallel processing,
remote data transmission, and more.
Replication
Fault tolerance also brings a number of potential disadvantages:
Interference with fault detection in the same component. In a passenger vehicle, for example, with a fault-tolerant tire arrangement it may not be obvious to the driver when a tire has been punctured. This is usually handled with a separate "automated fault-detection system".
Interference with fault detection in another component. Another variation of this problem
is when fault tolerance in one component prevents fault detection in a different component.
For example, if component B performs some operation based on the output from component
A, then fault tolerance in B can hide a problem with A.
Reduction of priority of fault correction. Even if the operator is aware of the fault, having a
fault-tolerant system is likely to reduce the importance of repairing the fault. If the faults are
not corrected, this will eventually lead to system failure, when the fault-tolerant component
fails completely or when all redundant components have also failed.
Test difficulty. For certain critical fault-tolerant systems, such as a nuclear reactor, there is no
easy way to verify that the backup components are functional. The most infamous example of
this is Chernobyl, where operators tested the emergency backup cooling by disabling primary
and secondary cooling. The backup failed, resulting in a core meltdown and massive release
of radiation.
Cost. Both fault-tolerant components and redundant components tend to increase cost. This
can be a purely economic cost or can include other measures, such as weight. Manned
spaceships, for example, have so many redundant and fault-tolerant components that their
weight is increased dramatically over unmanned systems, which don't require the same level
of safety.
Inferior components. A fault-tolerant design may allow for the use of inferior components,
which would have otherwise made the system inoperable. While this practice has the potential
to mitigate the cost increase, use of multiple inferior components may lower the reliability of
the system to a level equal to, or even worse than, a comparable non-fault-tolerant system.
A good plan for an application should provide, for example:
An organized file and folder structure for images, CSS files, JS files, etc.
A structure that will allow future adaptability to cross-platform use.
An organized system for reusable code, such as menus, headers, functions and classes.
The plan also ensures that the solution is implemented efficiently. Written code is far more expensive than the overall plan: it costs less to scrap a plan than to scrap written code.
Make the Code Understandable
For the applications to be sustainable the written code should be easily understandable by any other
adequately experienced programmer, presently or in the future. Many applications are used and re-
adapted long after the coder created them. This involves the way a programmer comments, indents
and writes the code.
Readability
Readability is the ease with which a programmer can read and understand the code, including being able to return to it later when necessary and still make sense of it. In a professional environment involving a team of programmers, this characteristic is crucial for the smooth flow of work: readable code is far easier to decipher and maintain.
Indentation
Indentation is the placement of text further to the left or to the right in comparison to the rest of the
text surrounding it. Indentation helps readability. For example, when a loop with multiple decision conditions such as if-else is properly indented, it is much easier to see where each program block begins and ends.
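A small hypothetical Java example shows the effect of consistent indentation:

```java
public class GradeReport {
    public static void main(String[] args) {
        int[] scores = {95, 80, 62};
        // Consistent indentation shows where each loop and if-else block begins and ends.
        for (int score : scores) {
            String grade;
            if (score >= 90) {
                grade = "A";
            } else if (score >= 75) {
                grade = "B";
            } else {
                grade = "C";
            }
            System.out.println(score + " -> " + grade);
        }
    }
}
```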
Comments
Commenting of code helps the reader to better read and work through the code and figure out exactly
what is happening at every point in time. Comments are little explanations placed at strategic points in
the code to make things clearer. They usually carry as much information as possible in a short length
of text. Comments should be added even when the code seems self-explanatory.
Naming Conventions
Proper naming conventions work hand in hand with readability as a best practice. Adopt an easily
understandable naming format for function, variable and class names. For example, a variable for storing a student's age can be called studentAge, and a function to calculate a salary can be called computeSalary.
Validation Checks
Validation Checks refer to mechanisms which are incorporated into code to ensure that all input data
(values) conform to that input field's requirements. In other words, the user of an application, for
example, can only enter integers (whole numbers) into a 'credit card number field'. If the user attempts
to use letters or strings (letters mixed with numbers) an error will occur, generating an error message
and the entry will not be accepted.
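A minimal sketch of such a check in Java (the field name, prompt and digit count are our own illustrative choices):

```java
import java.util.Scanner;

// Sketch of an input validation check: only digits are accepted for a card number field.
public class CardNumberField {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Enter credit card number: ");
        String input = in.nextLine().trim();

        if (input.matches("\\d{16}")) {          // exactly 16 digits, nothing else
            System.out.println("Accepted.");
        } else {
            System.out.println("Error: the card number must contain 16 digits only.");
        }
    }
}
```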
Optimize Code Efficiency
It is one thing to write working code, but writing code that is efficient and executes quickly takes additional skill. Efficiency can be achieved through the use of loops, arrays and the proper use of boolean functions, for example. In the following example we will see how a loop is used to improve code efficiency. A loop is a sequence of instructions that executes repeatedly until a particular condition is met. This keeps the code short and avoids writing more lines than necessary.
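Here is a minimal sketch of the idea (our own example): instead of writing one print statement per line of output, a single loop repeats the instruction until its condition is no longer met.

```java
// Without a loop, printing 1..100 would need one hundred println statements.
public class CountUp {
    public static void main(String[] args) {
        // The loop body repeats until the condition i <= 100 is no longer true.
        for (int i = 1; i <= 100; i++) {
            System.out.println(i);
        }
    }
}
```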
Exception Handling
An exception handler is a set of code that determines a program's response when an unusual or unpredictable event occurs and disrupts the normal sequence of execution. These anomalies often occur due to operating system or hardware faults, for example a corrupt drive that holds an application file the program is attempting to access. The exception handler generates an error message and the application responds accordingly. Exception handling makes sure that the program does not end abruptly with an unknown error.
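A short hypothetical sketch (the file name app.config is our own example): the handler reports a problem reading a file instead of letting the program end abruptly.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of exception handling: a failed file read is reported, not left to crash the program.
public class ConfigLoader {
    public static void main(String[] args) {
        try {
            String config = Files.readString(Path.of("app.config"));  // may not exist or be readable
            System.out.println("Loaded configuration:\n" + config);
        } catch (IOException e) {
            // The handler turns the anomaly into a clear message and a controlled response.
            System.out.println("Error: could not read configuration file (" + e.getMessage() + ").");
        }
    }
}
```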
Reliability Measurement
Reliability refers to the consistency of a measure. Psychologists consider three types of consistency:
over time (test-retest reliability), across items (internal consistency), and across different researchers
(inter-rater reliability).
Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, then the scores
they obtain should also be consistent across time. Test-retest reliability is the extent to which this is
actually the case. For example, intelligence is generally thought to be consistent across time. A person
who is highly intelligent today will be highly intelligent next week.
Assessing test-retest reliability requires using the measure on a group of people at one time, using it
again on the same group of people at a later time, and then looking at test-retest correlation between
the two sets of scores. This is typically done by graphing the data in a scatterplot and computing
Pearson’s r.
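For completeness, a small sketch (with hypothetical scores) of computing Pearson's r between two administrations of the same test:

```java
// Sketch: Pearson's r between scores at time 1 and time 2 (hypothetical data).
public class TestRetest {
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double meanX = 0, meanY = 0;
        for (int i = 0; i < n; i++) { meanX += x[i]; meanY += y[i]; }
        meanX /= n;
        meanY /= n;
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - meanX) * (y[i] - meanY);
            varX += (x[i] - meanX) * (x[i] - meanX);
            varY += (y[i] - meanY) * (y[i] - meanY);
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        double[] time1 = {10, 12, 9, 15, 14};
        double[] time2 = {11, 13, 8, 14, 15};
        System.out.printf("Test-retest r = %.2f%n", pearson(time1, time2));
    }
}
```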
Again, high test-retest correlations make sense when the construct being measured is assumed to be
consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality
dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for
example, is that it changes. So a measure of mood that produced a low test-retest correlation over a
period of a month would not be a cause for concern.
Internal Consistency
A second kind of reliability is internal consistency, which is the consistency of people’s responses
across the items on a multiple-item measure. In general, all the items on such measures are supposed
to reflect the same underlying construct, so people’s scores on those items should be correlated with
each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth
should tend to agree that they have a number of good qualities.
Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data.
One approach is to look at a split-half correlation. This involves splitting the items into two sets, such
as the first and second halves of the items or the even- and odd-numbered items. Then a score is
computed for each set of items, and the relationship between the two sets of scores is examined.
Interrater Reliability
Many behavioural measures involve significant judgment on the part of an observer or a rater. Inter-
rater reliability is the extent to which different observers are consistent in their judgments. For
example, if you were interested in measuring university students’ social skills, you could make video
recordings of them as they interacted with another student whom they are meeting for the first time.
Then you could have two or more observers watch the videos and rate each student’s level of social
skills. To the extent that each participant does in fact have some level of social skills that can be
detected by an attentive observer, different observers’ ratings should be highly correlated with each
other. Inter-rater reliability would also have been measured in Bandura’s Bobo doll study.
Validity
Validity is the extent to which the scores from a measure represent the variable they are intended to measure.
But how do researchers make this judgment? We have already considered one factor that they take
into account—reliability. When a measure has good test-retest reliability and internal consistency,
researchers should be more confident that the scores represent what they are supposed to. There has to
be more to it, however, because a measure can be extremely reliable but have no validity whatsoever.
As an absurd example, imagine someone who believes that people’s index finger length reflects their
self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers.
Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these
types is that they are other kinds of evidence—in addition to reliability—that should be taken into
account when judging the validity of a measure. Here we consider three basic kinds: face validity,
content validity, and criterion validity.
Face Validity
Face validity is the extent to which a measurement method appears “on its face” to measure the
construct of interest. Most people would expect a self-esteem questionnaire to include items about
whether they see themselves as a person of worth and whether they think they have good qualities. So
a questionnaire that included these kinds of items would have good face validity.
Face validity is at best a very weak kind of evidence that a measurement method is measuring what it
is supposed to. One reason is that it is based on people’s intuitions about human behaviour, which are
frequently wrong. It is also the case that many established measures in psychology work quite well
despite lacking face validity.
Content Validity
Content validity is the extent to which a measure “covers” the construct of interest. For example, if a
researcher conceptually defines test anxiety as involving both sympathetic nervous system activation
(leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include
items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined
as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person
has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about
exercising, feels good about exercising, and actually exercises. So to have good content validity, a
measure of people’s attitudes toward exercise would have to reflect all three of these aspects.
Criterion Validity
Criterion validity is the extent to which people’s scores on a measure are correlated with other
variables (known as criteria) that one would expect them to be correlated with. For example, people’s
scores on a new measure of test anxiety should be negatively correlated with their performance on an
important school exam. If it were found that people’s scores were in fact negatively correlated with
their exam performance, then this would be a piece of evidence that these scores really represent
people’s test anxiety. But if it were found that people scored equally well on the exam regardless of
their test anxiety scores, then this would cast doubt on the validity of the measure.
Criteria can also include other measures of the same construct. For example, one would expect new
measures of test anxiety or physical risk taking to be positively correlated with existing measures of
the same constructs. This is known as convergent validity.
Discriminant Validity
Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated
with measures of variables that are conceptually distinct. For example, self-esteem is a general
attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or
bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should
not be very highly correlated with their moods. If the new measure of self-esteem were highly
correlated with a measure of mood, it could be argued that the new measure is not really measuring
self-esteem; it is measuring mood instead.
Safety Engineering
Safety engineering is an engineering discipline that assures that engineered systems provide
acceptable levels of safety. It is strongly related to systems engineering, industrial engineering and the
subset system safety engineering. Safety engineering assures that a life-critical system behaves as
needed, even when components fail.
Safety engineering is a field of engineering that deals with accident prevention, risk of human error
reduction and safety provided by the engineered systems and designs. It is associated with industrial
engineering and system engineering and applied to manufacturing, public works and product designs
to make safety an integral part of operations.
The term safety refers to a condition of being safe or protected. Safety in the context of occupational
health and safety means a state of being protected against physical, psychological, occupational or
mechanical failure, damage, accidents, death, injury, or other highly undesirable events. Safety can
therefore be defined as the protection of people from physical injury.
Health and safety are used together to indicate concern for the physical and mental wellbeing of the
individual at work. Safety is also described as a condition where positive control of known hazards
exists in an effort to achieve an acceptable degree of calculated risk such as a permissible exposure
limit.
Safety – freedom from unacceptable risk or harm.
Accident – an undesired event giving rise to death, ill health, injury, damage or other loss.
Incident – a work-related event in which injury or ill health (regardless of severity) or fatality
occurred, or could have occurred.
Risk – the combination of the likelihood of an occurrence of a hazardous event or exposure and the
severity of injury or ill health that can be caused by the event or the exposure.
Risk assessment – the process of evaluating the risk(s) arising from a hazard, taking into account the
adequacy of existing controls, and deciding whether or not the risk is acceptable.
Non-conformity – a deviation from work standards, practices, procedures, regulations or legal
requirements.
Six steps to safety: these steps are short reminders for safe operation; years of experience have shown
them to be the safest way to perform your daily work.
SAFE PRODUCTION RULES
Safe production rules are developed to reinforce the safety policy and to pursue the objective of zero
harm. They provide a basis for trying to eliminate fatal and serious accidents and occupational health
risks, and were formulated through a historical review of fatal and serious accidents and
occupational hazards in the company.
Reduce accidents
Control and eliminate hazards
Develop new methods and techniques to improve safety
Maximize returns on safety efforts
Maximize public confidence with respect to product safety.
ROLES OF SAFETY ENGINEERS
1. Safety engineers ensure the well-being of people and property.
2. These professionals combine knowledge of an engineering discipline with knowledge of the health or
safety regulations related to their discipline to keep work environments, buildings and people safe from harm.
3. The work of safety engineers helps their employers lower the cost of insurance and comply with
laws and regulations related to health and safety.
4. Inspections. One of the primary duties of safety engineers is to inspect machinery, equipment and
production facilities to identify potential dangers.
5. Safety engineers are also responsible for making sure that buildings meet all codes, and that
manufacturing equipment, storage facilities and products meet all applicable health and safety
regulations. Fire prevention and industrial safety engineers, in particular, spend a great deal of time
involved with inspection-related activities.
6. Safety engineers are also typically involved in consulting and planning activities. Having a safety
engineer involved from the planning stages of a project enables you to focus on safety as an integral
part of the process, rather than just as something tacked on at the end.
7. When working as consultants, safety engineers bring their education and experience to bear in
analyzing complex processes, conditions and behaviors, and apply a systemic approach to make sure
that nothing has been overlooked. Aerospace safety engineers, product safety engineers, and systems
safety engineers spend a lot of time planning, designing, and consulting.
8. They are involved in carrying out risk assessments.
9. They investigate the causes of accidents, cases of work-related disease or ill health, and dangerous
occurrences.
Safety-critical Systems
A safety-critical system (SCS) or life-critical system is a system whose failure or malfunction may
result in one (or more) of the following outcomes: death or serious injury to people, loss or severe
damage to equipment or property, or harm to the environment. Examples of safety-critical systems include:
Circuit breaker
Emergency services dispatch systems
Electricity generation, transmission and distribution
Fire alarm
Fire sprinkler
Fuse (electrical)
Fuse (hydraulic)
Life support systems
Telecommunications
Medicine
Heart-lung machines
Mechanical ventilation systems
Infusion pumps and Insulin pumps
Radiation therapy machines
Robotic surgery machines
Defibrillator machines
Pacemaker devices
Dialysis machines
Devices that electronically monitor vital functions (electrography; especially,
electrocardiography, ECG or EKG, and electroencephalography, EEG)
Medical imaging devices (X-ray, computerized tomography- CT or CAT, different magnetic
resonance imaging- MRI- techniques, positron emission tomography- PET)
Even healthcare information systems have significant safety implications
Recreation
Amusement rides
Climbing equipment
Parachutes
Scuba equipment
o Diving rebreather
o Dive computer (depending on use)
Transport
Put the risk or hazard at the root of the tree and identify the system states that could lead
to that hazard.
Where appropriate, link these with 'and' or 'or' conditions.
A goal should be to minimize the number of single causes of system failure.
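As a minimal, hypothetical sketch (not taken from the source), the tree structure described above can
be represented in Python with simple 'and'/'or' gate functions; all event names and states below are
illustrative.

# Minimal fault-tree sketch: the hazard is the root, basic events are leaves,
# and intermediate nodes combine their children with AND / OR logic.

def or_gate(*children):
    return any(children)

def and_gate(*children):
    return all(children)

# Basic (leaf) events observed in the current, hypothetical system state
sensor_failed = True
backup_sensor_failed = False
operator_missed_alarm = False

# Intermediate events linking the leaves to the top-level hazard
no_valid_reading = and_gate(sensor_failed, backup_sensor_failed)   # both sensors must fail
hazard_undetected = or_gate(no_valid_reading, operator_missed_alarm)

print("Top-level hazard reachable:", hazard_undetected)

# A design goal is to minimise single causes of failure: a top event reachable
# through a single OR branch from one basic event is a single point of failure.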
Risk reduction
The aim of this process is to identify dependability requirements that specify how the risks should be
managed and ensure that accidents/incidents do not arise. Risk reduction strategies: hazard avoidance;
hazard detection and removal; damage limitation.
‘argument’ – Above all, the safety case exists to communicate an argument. It is used to
demonstrate how someone can reasonably conclude that a system is acceptably safe from the
evidence available.
‘clear’ – A safety case is a device for communicating ideas and information, usually to a third
party (e.g. a regulator). In order to do this convincingly, it must be as clear as possible.
‘system’ – The system to which a safety case refers can be anything from a network of pipes or a
software configuration to a set of operating procedures. The concept is not limited to
consideration of conventional engineering ‘design’.
‘acceptably’ – Absolute safety is an unobtainable goal. Safety cases are there to convince
someone that the system is safe enough.
‘context’ – Context-free safety is impossible to argue. Almost any system can be unsafe if used in
an inappropriate or unexpected manner. It is part of the job of the safety case to define the context
within which safety is to be argued.
A safety case is a comprehensive and structured set of safety documentation which aims to ensure
that the safety of a specific vessel or piece of equipment can be demonstrated by reference to:
The safety argument is that which communicates the relationship between the evidence and
objectives. Based on the author’s personal experience, gained from reviewing a number of safety
cases, and validated through discussion with many safety practitioners, a commonly observed failing
of safety cases is that the role of the safety argument is often neglected. In such safety cases, many
pages of supporting evidence are often presented but little is done to explain how this evidence relates
to the safety objectives. The reader is often left to guess at an unwritten and implicit argument.
Both argument and evidence are crucial elements of the safety case that must go hand-in-hand.
Argument without supporting evidence is unfounded, and therefore unconvincing. Evidence without
argument is unexplained; it can be unclear that (or how) the safety objectives have been satisfied. In the
following section we examine how safety arguments may be clearly communicated within safety case
reports.
SAFETY CASE DEVELOPMENT LIFECYCLE
It is increasingly recognised by both safety case practitioners and many safety standards that safety
case development, contrary to what may historically have been practised, cannot be left as an activity
to be performed towards the end of the safety lifecycle. This view of safety case production being left
until all analysis and development are completed can lead to the following problems:
Large amounts of re-design resulting from a belated realisation that a satisfactory safety argument
cannot be constructed. In extreme cases, this has resulted in ‘finished’ products having to be
completely discarded and redeveloped.
Less robust safety arguments being presented in the final safety case. Safety case developers are
forced to argue over a design as it is given to them – rather than being able to influence the design
in such a way as to improve safety and improve the nature of the safety argument. This can result
in, for example, probabilistic arguments being relied upon more heavily than deterministic
arguments based upon explicit design features (the latter being often more convincing).
Lost safety rationale. The rationale concerning the safety aspects of the design is best recorded at
‘design-time’. Where capture of the safety argument is left until after design and implementation
– it is possible to lose some of the safety aspects of the design decision making process which, if
available, could strengthen the final safety case.
Security Engineering
Security engineering is about building systems to remain dependable in the face of malice, error, or
mischance. As a discipline, it focuses on the tools, processes, and methods needed to design,
implement, and test complete systems, and to adapt existing systems as their environment evolves.
Security engineering must start early in the application deployment process. In fact, security should be
addressed at each step of application deployment: early security planning, securing the system,
developing the system with security, and testing the system's security. The security of a system can be threatened
via two violations:
Threat: A program that has the potential to cause serious damage to the system.
Attack: An attempt to break security and make unauthorized use of an asset.
Security violations affecting the system can be categorized as malicious and accidental
threats. Malicious threats, as the name suggests, are a kind of harmful computer code or web script
designed to create system vulnerabilities leading to back doors and security breaches. Accidental
threats, on the other hand, are comparatively easier to protect against. Example: a Denial of
Service (DDoS) attack. Security can be compromised via any of the breaches mentioned below:
Port Scanning:
It is a means by which the cracker identifies the vulnerabilities of the system to attack. It is an
automated process that involves creating a TCP/IP connection to a specific port. To protect the
identity of the attacker, port scanning attacks are launched from zombie systems, that is, previously
compromised, independent systems that still serve their owners while being used for such
notorious purposes.
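As a minimal, hypothetical sketch of the connection mechanism described above, the following Python
code attempts TCP connections to a few illustrative ports on the local machine; it should only ever be
pointed at systems you own or are explicitly authorised to test.

import socket

HOST = "127.0.0.1"           # assumption: checking our own machine
PORTS = [22, 80, 443, 3306]  # illustrative port list

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)                      # don't hang on filtered ports
        result = s.connect_ex((HOST, port))    # 0 means the connection succeeded
        state = "open" if result == 0 else "closed/filtered"
        print(f"port {port}: {state}")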
Denial of Service:
Such attacks are not aimed at collecting information or destroying system files. Rather,
they are used for disrupting the legitimate use of a system or facility. These attacks are generally
network-based. They fall into two categories:
Attacks in this first category use so many system resources that no useful work can be
performed.
Attacks in the second category involve disrupting the network of the facility. These attacks
are a result of the abuse of some fundamental TCP/IP principles.
Security Measures Taken
Physical:
The sites containing computer systems must be physically secured against armed and
malicious intruders. The workstations must be carefully protected.
Human:
Only appropriate users must have the authorization to access the system. Phishing and
Dumpster Diving must be avoided.
Operating system:
The system must protect itself from accidental or purposeful security breaches.
Networking System:
Almost all of the information is shared between different systems via a network. Intercepting
these data could be just as harmful as breaking into a computer. Hence, the network should
be properly secured against such attacks.
Safety and Organizations
1. Health and Safety Executive
2. Institute of Occupational Safety and Health (IOSH)
3. NEBOSH
4. National Safety Council
5. National Institute for Occupational Safety and Health
6. Health and Safety Authority
7. Occupational safety and Health Administration (OSHA)
8. European Agency for Safety and Health at work
9. Safe work Australia
10. British Safety Council
11. Occupational Safety and Health Consultants Register
12. National Compliance and Risk Qualification
13. Canadian Center for Occupational Health and Safety
14. Occupational Safety and Health Review Commission
15. Mine Safety and Health Administration
16. State Administration of Work Safety
17. Korea Occupational Safety and Health Agency
18. Board of Canadian Registered Safety Professionals
19. American Society of Safety Professionals
Health and Safety Executive:
The Health and Safety Executive is the body responsible for the encouragement, regulation and
enforcement of workplace health, safety and welfare, and for research into occupational risks in Great
Britain. It is a non-departmental public body of the United Kingdom.
Institute of Occupational Safety and Health (IOSH):
The Institution of Occupational Safety and Health (IOSH) is the world’s leading professional body for
people responsible for safety and health in the workplace. IOSH acts as a champion, supporter, adviser,
advocate and trainer for safety and health professionals working in organisations of all sizes. We give
the safety and health profession a consistent, independent, authoritative voice at the highest levels.
NEBOSH:
National Examination Board in Occupational Safety and Health is a UK-based independent
examination board delivering vocational qualifications in health, safety & environmental practice and
management. It was founded in 1979 and has charitable status.
National Safety Council:
The National Safety Council is a 501(c)(3) nonprofit, public service organization promoting health and
safety in the United States of America. Headquartered in Itasca, Illinois, NSC is a member
organization, founded in 1913 and granted a congressional charter in 1953.
National Institute for Occupational Safety and Health:
The National Institute for Occupational Safety and Health is the United States federal agency
responsible for conducting research for the prevention of work-related injury and illness.
Health and Safety Authority:
The Health and Safety Authority is the national body in Ireland with responsibility for occupational
health and safety. Its role is to secure health and safety at work.
Occupational safety and Health Administration (OSHA):
The Occupational Safety and Health Administration is an agency of the United States Department of
Labor. Congress established the agency under the Occupational Safety and Health Act, which
President Richard M. Nixon signed into law on December 29, 1970.
European Agency for Safety and Health at work:
The European Agency for Safety and Health at Work is a decentralised agency of the European Union
with the task of collecting, analysing and disseminating relevant information that can serve the needs
of people involved in safety and health at work.
Safe work Australia:
SWA is an Australian government statutory body established in 2008 to develop national policy
relating to WHS and workers’ compensation. We are jointly funded by the Commonwealth, state and
territory governments through an Intergovernmental Agreement. We perform our functions in
accordance with our Corporate plan and Operational plan, which are agreed annually by Ministers for
Work Health and Safety.
British Safety Council:
The British Safety Council, a Registered Charity founded by James Tye in 1957, is one of the world’s
leading Health and Safety organisations, alongside the likes of IOSH and IIRSM; unlike these, the
Council's members are mostly companies.
Occupational Safety and Health Consultants Register:
The Occupational Safety and Health Consultants Register (OSHCR) is a public register of UK-based
health and safety advice consultants, set up to assist UK employers and business owners with general
advice on workplace health and safety issues. The register was established in response to a
recommendation that health and safety consultants should be accredited by professional bodies and
that a web-based directory of them be established.
National Compliance and Risk Qualification:
National Compliance and Risk Qualifications – NCRQ – has been established by a number of leading
experts in health and safety. This includes representatives of some of the UK’s largest employers,
including the BBC, Royal Mail, Siemens plc, and local authorities, specialists from the Health and
Safety Executive, legal experts, and academics.
Canadian Center for Occupational Health and Safety:
The Canadian Centre for Occupational Health and Safety (CCOHS) is an independent departmental
corporation under Schedule II of the Financial Administration Act and is accountable to Parliament
through the Minister of Labour. CCOHS functions as the primary national agency in Canada for the
advancement of safe and healthy workplaces and preventing work-related injuries, illnesses and
deaths.
Occupational Safety and Health Review Commission:
The Occupational Safety and Health Review Commission (OSHRC) is an independent federal agency
created under the Occupational Safety and Health Act to decide contests of citations or penalties
resulting from OSHA inspections of American workplaces.
Mine Safety and Health Administration:
The Mine Safety and Health Administration (MSHA) is an agency of the United States Department of
Labor which administers the provisions of the Federal Mine Safety and Health Act of 1977 (Mine
Act) to enforce compliance with mandatory safety and health standards as a means to eliminate fatal
accidents, to reduce the frequency and severity of nonfatal accidents, to minimize health hazards, and
to promote improved safety and health conditions in the nation's mines.
State Administration for work safety:
The State Administration of Work Safety, reporting to the State Council, is the non-ministerial agency
of the Government of the People’s Republic of China responsible for the regulation of risks to
occupational safety and health in China.
Korea Occupational Safety and Health Agency:
Korea Occupational Safety & Health Agency is a body in South Korea, which serves to protect the
health and safety of Korean workers. It was in the late 1980s that the KOSHA (Korea Occupational
Safety & Health Agency) Act was released to the public. After the KOSHA Act was released in 1986, the
labor department of Korea, which is the competent organization for KOSHA, moved to the next step:
it set up the plan for establishing KOSHA and inaugurated the institution committee for KOSHA.
Board of Canadian Registered Safety Professionals:
The Board of Canadian Registered Safety Professionals provides certification of occupational health
and safety professionals in Canada and has an established Code of Ethics.
American Society of Safety Professionals:
The American Society of Safety Professionals is a global association for occupational safety and
health professionals. For more than 100 years, the association has supported occupational safety and
health (OSH) professionals in their efforts to prevent workplace injuries, illnesses and fatalities. We
provide education, advocacy, standards development and a professional community to our members
in order to advance their careers and the OSH profession as a whole.
Security Requirements
A security requirement is a statement of needed security functionality that ensures one of many
different security properties of software is being satisfied. Security requirements are derived from
industry standards, applicable laws, and a history of past vulnerabilities. Security requirements define
new features or additions to existing features to solve a specific security problem or eliminate a
potential vulnerability.
Security requirements provide a foundation of vetted security functionality for an application. Instead
of creating a custom approach to security for every application, standard security requirements allow
developers to reuse the definition of security controls and best practices. Those same vetted security
requirements provide solutions for security issues that have occurred in the past. Requirements exist
to prevent the repeat of past security failures.
Type of security requirements:
Security requirements can be formulated on different abstraction levels. At the highest abstraction
level, they basically just reflect security objectives. An example of a security objective could be "The
system must maintain the confidentiality of all data that is classified as confidential".
More useful for a software architect or a system designer, however, are security requirements that
describe more concretely what must be done to assure the security of a system and its data. There are
four different types of security requirements:
Secure Functional Requirements, this is a security related description that is integrated into
each functional requirement. Typically, this also says what shall not happen. This requirement
artifact can for example be derived from misuse cases
Functional Security Requirements, these are security services that need to be achieved by
the system under inspection. Examples could be authentication, authorization, backup, server
clustering, etc. This requirement artifact can be derived from best practices, policies, and
regulations.
Non-Functional Security Requirements, these are security-related architectural
requirements, like "robustness" or "minimal performance and scalability". This requirement
type is typically derived from architectural principles and good practice standards.
Secure Development Requirements, these requirements describe required activities during
system development which assure that the outcome is not subject to vulnerabilities. Examples
could be "data classification", "coding guidelines" or "test methodology". These requirements
are derived from corresponding best practice frameworks like "CLASP".
Implementation
Successful use of security requirements involves four steps. The process includes discovering /
selecting, documenting, implementing, and then confirming correct implementation of new security
features and functionality within an application.
Discovery and Selection
The process begins with discovery and selection of security requirements. In this phase, the developer
reviews security requirements from a standard source such as ASVS and chooses which
requirements to include for a given release of an application. The point of discovery and selection is
to choose a manageable number of security requirements for this release or sprint, and then continue
to iterate for each sprint, adding more security functionality over time.
Investigation and Documentation
During investigation and documentation, the developer reviews the existing application against the
new set of security requirements to determine whether the application currently meets the requirement
or if some development is required. This investigation culminates in the documentation of the results
of the review.
Test
Test cases should be created to confirm the existence of the new functionality or disprove the
existence of a previously insecure option.
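As a minimal, hypothetical illustration, a unit test such as the following could confirm that a newly
documented password-policy requirement is actually enforced; the policy function is an illustrative
stand-in rather than code from the source.

import re
import unittest

def meets_password_policy(password: str) -> bool:
    # Illustrative requirement: at least 12 characters, containing upper case,
    # lower case and a digit.
    return (len(password) >= 12
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"\d", password) is not None)

class PasswordPolicyTests(unittest.TestCase):
    def test_weak_passwords_are_rejected(self):
        self.assertFalse(meets_password_policy("short1A"))
        self.assertFalse(meets_password_policy("alllowercase123"))

    def test_compliant_password_is_accepted(self):
        self.assertTrue(meets_password_policy("Str0ngEnoughPass"))

if __name__ == "__main__":
    unittest.main()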
Vulnerabilities Prevented
Security requirements define the security functionality of an application. Better security built in from
the beginning of an application's life cycle results in the prevention of many types of vulnerabilities.
When a user accesses a web-based application, several security mechanisms are at work at once.
Common security concerns of a software system or an IT infrastructure system still
revolve around the CIA triad as described in the previous section.
When designing a system, we first need to see the general architecture of the system that should be
implemented for the business requirements to be fulfilled.
Consider the general architecture of a microservices-based web application, a common
approach for today's HTTP-based web applications and services.
Suppose we're designing a microservices-based system and trying to plan for the system's security from
the architecture design. We start by performing a risk assessment to see which parts of the system
have the highest risk. The system consists of an API gateway, an authentication service, a user
configuration service, a payment service, and a transaction service.
The five services serve as different components and functions of the system, each carrying its own
risks. But let’s focus on the service that serves as the front-line defense of the system: the API
gateway.
The API gateway is the one accepting requests directly from the public Internet, and the machine it’s
deployed on is at more risk to be compromised by an attacker compared to the other services deployed
on machines that are not directly exposed to the public Internet. The API gateway will need to parse
and process requests securely so that attackers cannot exploit the request parser by sending a
malformed HTTP request.
A malformed request that is not properly handled may cause the API gateway to crash or to be
manipulated into executing instructions it is not supposed to execute. It is a good idea to put the API
gateway behind a firewall that can help filter out malicious requests and stop exploit attempts before
they reach the API gateway; but the firewall itself might be exploitable, so pick something that is
already battle-proven and quick to patch whenever a vulnerability is found.
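As a small, hypothetical sketch of this idea (not code from the source), an API gateway can
defensively validate a request body before forwarding it; the size limit and the required field below
are assumptions.

import json

MAX_BODY_BYTES = 64 * 1024   # assumed limit; reject unreasonably large payloads

def validate_request(raw_body: bytes) -> dict:
    # Parse and validate an incoming request body before it is forwarded.
    # Raises ValueError for anything malformed instead of crashing the gateway.
    if len(raw_body) > MAX_BODY_BYTES:
        raise ValueError("payload too large")
    try:
        payload = json.loads(raw_body.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValueError("malformed request body") from exc
    if not isinstance(payload, dict) or "action" not in payload:   # hypothetical required field
        raise ValueError("missing required fields")
    return payload

# Example: a malformed request is rejected cleanly rather than crashing the parser.
try:
    validate_request(b"\xff\xfe not json at all")
except ValueError as err:
    print("rejected:", err)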
While the other services also have their own risks we should handle, the API gateway and the
authentication service are to be prioritized due to the higher risks they pose to the whole system if
compromised.
By putting the API gateway at the front line, with some extra protection such as firewall rules, we
can avoid exposing every service from direct access. Since only the API gateway is hit with traffic
directly from the public Internet, we can focus on securing the API gateway from any risks involving
disfigured requests and ensuring the requests forwarded by the API gateway to each respective service
are already safe.
Imagine if we let every single service be directly accessible from the public Internet. We would need to
ensure every single one of them has the same standards for implementation security regarding how to
handle raw requests. This setup would be much more expensive to maintain as the number of services
we have increases, as we need to secure every single one of them instead of just one key service that
acts as a bridge between the public Internet and services in the internal network.
Poorly planned system security at the architectural level leaves us with the extra work of securing
many things that we would not have to bother with had we designed the system architecture properly
from the start.
Building a secure system is not easy, and there will never be enough resources to make a system
perfectly secure. But by performing a risk assessment on the system we’re trying to secure, we’ll be
able to identify which parts of the system need to be prioritized.
The risk assessment approach can be used for performing a security assessment on an existing system,
but it’s also useful when we’re trying to design a system from scratch. By applying the principles to
our system architecture design and adding mechanisms to mitigate possible issues, we can avoid
possible severe risks in the system from the start.
Even for a system that’s designed with security in mind at the beginning, the system will grow more
and more complex as time goes on. The complexity will add more risks to the system, as a more
complex system’s behaviors tend to be more unpredictable. We can manage the system’s complexity
by performing system maintenance tasks such as restructuring parts of the system in order to
simplify the overall design and the interactions between components, and removing parts that are no
longer used.
Security Testing and Assurance
Security Testing is a type of Software Testing that uncovers vulnerabilities, threats, risks in a software
application and prevents malicious attacks from intruders. The purpose of Security Tests is to identify
all possible loopholes and weaknesses of the software system which might result in a loss of
information, revenue, or reputation at the hands of employees or outsiders of the organization.
The main goal of Security Testing is to identify the threats in the system and measure its potential
vulnerabilities, so that the threats can be countered and the system does not stop functioning or cannot
be exploited. It also helps in detecting all possible security risks in the system and helps developers to
fix the problems through coding.
Types of Security Testing
Vulnerability Scanning: This is done through automated software to scan a system against
known vulnerability signatures.
Security Scanning: It involves identifying network and system weaknesses, and later
provides solutions for reducing these risks. This scanning can be performed both manually
and with automated tools.
Penetration testing: This kind of testing simulates an attack from a malicious hacker. This
testing involves analysis of a particular system to check for potential vulnerabilities to an
external hacking attempt.
Risk Assessment: This testing involves analysis of security risks observed in the
organization. Risks are classified as Low, Medium and High. This testing recommends
controls and measures to reduce the risk.
Security Auditing: This is an internal inspection of Applications and Operating systems for
security flaws. An audit can also be done via line by line inspection of code
Ethical hacking: It involves hacking an organization's software systems. Unlike malicious hackers,
who steal for their own gains, the intent is to expose security flaws in the system.
Posture Assessment: This combines Security scanning, Ethical Hacking and Risk
Assessments to show an overall security posture of an organization.
How to do Security Testing
It is generally agreed that the cost will be higher if we postpone security testing until after the
software implementation phase or after deployment. So, it is necessary to involve security testing in
the earlier phases of the SDLC.
SDLC Phase                Security Processes
Coding and Unit Testing   Static and Dynamic Testing and Security White Box Testing
Tiger Box: This hacking is usually done on a laptop which has a collection of OSs and
hacking tools. This testing helps penetration testers and security testers to conduct
vulnerabilities assessment and attacks.
Black Box: The tester is given no information about the network topology or the technology;
the system is tested from the perspective of an external attacker.
Grey Box: Partial information is given to the tester about the system, and it is a hybrid of
white and black box models.
Security Testing Tool
1) Acunetix
Intuitive and easy to use, Acunetix by Invicti helps small to medium-sized organizations ensure their
web applications are secure from costly data breaches. It does so by detecting a wide range of web
security issues and helping security and development professionals act fast to resolve them.
2) Intruder
Intruder is a powerful, automated penetration testing tool that discovers security weaknesses across
your IT environment. Offering industry-leading security checks, continuous monitoring and an easy-
to-use platform, Intruder keeps businesses of all sizes safe from hackers.
3) Owasp
The Open Web Application Security Project (OWASP) is a worldwide non-profit organization
focused on improving the security of software. The project has multiple tools to pen test various
software environments and protocols.
4) WireShark
Wireshark is a network analysis tool previously known as Ethereal. It captures packets in real time and
displays them in a human-readable format. Basically, it is a network packet analyzer which provides
minute details about your network protocols, decryption, packet information, etc. It is open source
and can be used on Linux, Windows, OS X, Solaris, NetBSD, FreeBSD and many other systems. The
information retrieved via this tool can be viewed through a GUI or the TTY-mode TShark utility.
5) W3af
w3af is a web application attack and audit framework. It has three types of plugins: discovery, audit
and attack, which communicate with each other about any vulnerabilities found in a site. For example,
a discovery plugin in w3af looks for different URLs to test for vulnerabilities and forwards them to the
audit plugin, which then uses these URLs to search for vulnerabilities.
Security Assurance
1. Security Hardening
2. Security Testing
3. Vulnerability Management
Security Hardening
Security hardening describes the minimization of a system’s attack surface and proper configuration
of security functions. The former may be achieved by disabling unnecessary components, removing
superfluous system accounts, and closing any communication interfaces not in use – just to name a
few. The latter configuration task focuses on security controls within the system itself and ensures that
these can perform their functions as intended. This can include the configuration of host-based
firewalls, intrusion detection/ prevention capabilities, or operating system controls, such as SELinux.
Security hardening is particularly important before a system is deployed, but should be verified
regularly thereafter to confirm that the system still meets the defined hardening standard in the context
of its current operating environment.
Security Testing
Security testing aims to validate a system’s security posture by trying to identify any weaknesses or
vulnerabilities possibly remaining after security hardening. This activity can take many different
forms, depending on the complexity of the system under test and the available resources and skills. In
its most basic form, it may comprise an automated vulnerability scan from the outside as well as an
authenticated scan from the perspective of a user on the system. More advanced tests would go a step
further by analyzing the system’s responses and reasoning about communication flows that may
afford an attacker a way into the system. Established best practices, such as the OWASP Top 10,
can serve as a useful guide here to focus the test activities on the most common vulnerabilities.
Beyond that, a fully manual test could dig even deeper, for example by trying to discover vulnerabilities
in the system's source code, if available.
Similar to hardening of the system, security testing should also be performed before and during a
system's operation. Regular, automated security scans can be a great tool to identify new
vulnerabilities early on.
Vulnerability Management
Vulnerability management takes the results of the security tests performed and attempts to mitigate
them. This includes the analysis of each finding (Is this actually an issue in the context of this
system?), prioritization (How big of an issue is it?), and mitigation (How can it be fixed?). While the
last part should be fairly obvious, the first two are just as essential since it is important to take a risk-
based approach to vulnerability mitigation. No system will ever be completely free of vulnerabilities,
but the goal should be to avoid the ones that are critical and easily abusable.
Note that the term security assurance is not strictly defined, so in some organizations further aspects
may be considered part of it, such as a secure software development process.
Resilience Engineering
Resilience is the ability to absorb or avoid damage without suffering complete failure and is an
objective of design, maintenance and restoration for buildings and infrastructure, as well as
communities. A more comprehensive definition is that it is the ability to respond to, absorb, and adapt
to, as well as recover from, a disruptive event. A resilient structure/system/community is expected to be
able to resist an extreme event with minimal damage and functionality disruption during the
event; after the event, it should be able to rapidly recover its functionality to a level similar to or even
better than the pre-event level.
The concept of resilience originated in engineering and was then gradually applied to other fields. It is
related to that of vulnerability. Both terms are specific to the event perturbation, meaning that a
system/infrastructure/community may be more vulnerable or less resilient to one event than another
one. However, they are not the same. One obvious difference is that vulnerability focuses on the
evaluation of system susceptibility in the pre-event phase; resilience emphasizes the dynamic features
in the pre-event, during-event, and post-event phases.
Resilience is a multi-faceted property, covering four dimensions: technical, organizational, social and
economic. Therefore, a single metric may not be sufficient to describe and quantify resilience.
In engineering, resilience is characterized by four Rs: robustness, redundancy, resourcefulness, and
rapidity. Current research studies have developed various ways to quantify resilience from multiple
aspects, such as functionality- and socioeconomic- related aspects.
Engineering resilience has inspired other fields and influenced how they interpret resilience,
e.g. supply chain resilience.
Engineering resilience refers to the functionality of a system in relation to hazard mitigation. Within
this framework, resilience is calculated based on the time it takes a system to return to a single state
equilibrium. Researchers at the MCEER (Multi-Hazard Earthquake Engineering research center) have
identified four properties of resilience: Robustness, resourcefulness, redundancy and rapidity.
Robustness: the ability of systems to withstand a certain level of stress without suffering loss of
function.
Resourcefulness: the ability to identify problems and resources when threats may disrupt the
system.
Redundancy: the ability to have various paths in a system by which forces can be transferred to
enable continued function
Rapidity: the ability to meet priorities and goals in time to prevent losses and future disruptions.
How to build in resilience
React to failures. When errors occur, teams respond to them. When a failure occurs and there is no
response, you are not adapting.
Log correctly. It is easiest to treat failures when their cause is known. Building good logging reports
into the application can help identify errors quickly, allowing tech/support staff to easily handle and
treat the errors. Good logging is critical to root cause analysis.
Check your metrics. Building resiliency should consider important metrics like mean time to failure
(MTTF) and mean time to recovery (MTTR) in order to isolate impacted components and restore
optimal performance.
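As a small illustrative sketch (the incident timestamps are made up), MTTR and MTTF can be
computed directly from failure and recovery times:

from datetime import datetime

# Hypothetical (failure_start, service_restored) pairs for one component.
incidents = [
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 10, 40)),
    (datetime(2024, 2, 11, 8, 15), datetime(2024, 2, 11, 9, 0)),
    (datetime(2024, 3, 22, 23, 30), datetime(2024, 3, 23, 0, 10)),
]

# MTTR: average time from failure to recovery, in minutes.
repair_times = [(end - start).total_seconds() / 60 for start, end in incidents]
mttr_minutes = sum(repair_times) / len(repair_times)

# MTTF: average uptime between one recovery and the next failure, in hours.
uptimes = [(incidents[i + 1][0] - incidents[i][1]).total_seconds() / 3600
           for i in range(len(incidents) - 1)]
mttf_hours = sum(uptimes) / len(uptimes)

print(f"MTTR = {mttr_minutes:.1f} minutes, MTTF = {mttf_hours:.1f} hours")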
Know your options. Backup plans are illustrative of preparedness, not paranoia. When plan A fails,
and your company already has plans B, C, and D in place, your ability to respond to the failure
increases greatly.
Resilience in Cloud Computing
Resilient computing is a form of computing that distributes redundant IT resources for operational
purposes. In this form of computing, IT resources are pre-configured so that, when they are needed at
processing time, they can be used without interruption.
The characteristic of resiliency in cloud computing can refer to redundant IT resources within a single
cloud or across multiple clouds. By taking advantage of the resiliency of cloud-based IT services,
cloud consumers can improve both the efficiency and availability of their applications.
A resilient cloud fixes problems and continues operation. Cloud resilience is a term used to describe
the ability of servers, storage systems, data servers, or entire networks to remain connected to the
network without interfering with their functions or losing their operational capabilities. For a cloud
system to remain resilient, it needs to cluster servers, have redundant workloads, and even rely on
multiple physical servers. High-quality products and services will accomplish this task.
Complex systems
A recurring theme in resilience engineering is about reasoning holistically about systems, as opposed
to breaking things up into components and reasoning about components separately. This perspective
is known as systems thinking, which is a school of thought that has been influential in the resilience
engineering community.
When you view the world as a system, the idea of cause becomes meaningless, because there’s no
way to isolate an individual cause. Instead, the world is a tangled web of influences.
You’ll often hear the phrase socio-technical system. This language emphasizes that systems should be
thought of as encompassing both humans and technologies, as opposed to thinking about
technological aspects in isolation.
Five Pillars of Resilience Engineering
Monitoring and Visibility
It’s critical to implement constant monitoring to ensure your team can act quickly in the case of an
emergency. You have to monitor at the application level, identify your critical user flows, and ensure
you create synthetic transactions and heuristics monitoring to identify signs of disruption before the
experience for your customers starts to degrade.
One way you can challenge your engineers to prepare for the unknown is through regular games and
testing opportunities like SRT (site reliability testing) and outage simulations. In these games, we
divide the team in half. One team is tasked with understanding how to monitor several metrics of the
new technology to ensure it’s working correctly and to take manual action if needed to restore service
when a disruption is identified. The other team will purposely introduce several disruption modes and
monitor how they affect the system. It’s okay and even encouraged to push teams over the edge,
forcing them to reassess themselves and learn for next time.
A “Redundancy is King” Attitude
To ensure resilience engineering, it’s critical to have no single point of failure and proactively prepare
for where you might need “backup.” This can look like multiple cells supported by several servers and
all backed by different data centers. When you send your credentials to authenticate, if one subsystem
isn’t working, you can redirect to another, so the authentication works and appears seamless to the
end-user. We’ve spent a lot of time understanding failure modes and making sure our architecture can
immediately work around those modes.
A “No Mysteries” Mindset
Embracing a “no mystery” culture comes down to being willing and motivated to find the root cause
of any issue that happens in your production system, no matter the complexity. Every engineer must
maintain a mindset of curiosity and exploration and never settle for not knowing.
Strong Automation
Automation is an absolute requirement, but the only thing worse than having no automation at all is
having bad automation. A bug in your automation can take an entire system down faster than a human
can restore it and bring it back to operation.
The key to implementing effective automation is to treat it as production software, meaning strong
software development principles should apply. Even if your automation starts as a small number of
scripts, you need to consider a release cycle, testing automation, deployment, and rollback procedures.
This may seem overkill for your team initially, but your whole system will eventually depend on your
automation making the right decisions and having no bugs when executing. It’s hard to retrofit good
SDLC processes for your automation if they’re not incorporated from the beginning.
The Right Team
An organization that practices and prioritizes resilience engineering starts with its people. Long gone
are the days when an engineer would write software and then pass it off for someone else to test it and
run it. Today, every engineer is responsible for ensuring their software is robust, reliable, and
always on. Resiliency engineering is hard and requires a lot of passionate engineers, so make sure you
reward and recognize your team; ensure they know you understand the complexity of the challenges.
This takes a cultural shift and starts with who you hire. When you’re interviewing, ensure you hire
people who are proud of what they’ve built in previous roles and who get satisfaction from solving
tough problems while keeping a product running.
Cybersecurity
The technique of protecting internet-connected systems such as computers, servers, mobile devices,
electronic systems, networks, and data from malicious attacks is known as cybersecurity. We can
divide cybersecurity into two parts one is cyber, and the other is security. Cyber refers to the
technology that includes systems, networks, programs, and data. And security is concerned with the
protection of systems, networks, applications, and information. In some cases, it is also
called electronic information security or information technology security.
Cybersecurity is the protection of Internet-connected systems, including hardware, software, and data
from cyber attackers. It is primarily about people, processes, and technologies working together to
encompass the full range of threat reduction, vulnerability reduction, deterrence, international
engagement, and recovery policies and activities, including computer network operations, information
assurance, law enforcement, etc.
Cyber-attacks are now an international concern; they have raised many worries that could endanger the
global economy. As the volume of cyber-attacks grows, companies and organizations, especially
those that deal with information related to national security, health, or financial records, need to take
steps to protect their sensitive business and personal information.
Types of Cyber Security
Network Security: It involves implementing the hardware and software to secure a computer
network from unauthorized access, intruders, attacks, disruption, and misuse. This security
helps an organization to protect its assets against external and internal threats.
Application Security: It involves protecting the software and devices from unwanted threats.
This protection can be done by constantly updating the apps to ensure they are secure from
attacks. Successful security begins in the design stage, writing source code, validation, threat
modeling, etc., before a program or device is deployed.
Information or Data Security: It involves implementing a strong data storage mechanism to
maintain the integrity and privacy of data, both in storage and in transit.
Identity management: It deals with the procedure for determining the level of access that
each individual has within an organization.
Operational Security: It involves processing and making decisions on handling and securing
data assets.
Mobile Security: It involves securing the organizational and personal data stored on mobile
devices such as cell phones, computers, tablets, and other similar devices against various
malicious threats. These threats are unauthorized access, device loss or theft, malware, etc.
Cloud Security: It involves protecting the information stored in the digital environment or
cloud architectures for the organization. It uses various cloud service providers such as AWS,
Azure, Google, etc., to ensure security against multiple threats.
Disaster Recovery and Business Continuity Planning: It deals with the processes,
monitoring, alerts, and plans to how an organization responds when any malicious activity is
causing the loss of operations or data. Its policies dictate resuming the lost operations after
any disaster happens to the same operating capacity as before the event.
User Education: It deals with educating users about security, since people are often the
weakest link in the chain. It covers teaching users to follow good security practices, for
example recognizing suspicious emails and attachments, using strong passwords, and not
plugging in unidentified USB devices.
Why is Cyber Security important?
Today we live in a digital era where all aspects of our lives depend on the network, computer and
other electronic devices, and software applications. All critical infrastructure such as the banking
system, healthcare, financial institutions, governments, and manufacturing industries use devices
connected to the Internet as a core part of their operations. Some of their information, such as
intellectual property, financial data, and personal data, is sensitive, and unauthorized access or
exposure could have negative consequences. This information gives intruders and threat actors an
incentive to infiltrate these systems for financial gain, extortion, political or social motives, or just vandalism.
Cyber-attacks, system hacks, and other security incidents are now an international concern that could
endanger the global economy. Therefore, it is essential to have an excellent cybersecurity strategy to
protect sensitive information from high-profile security breaches. Furthermore, as the volume of
cyber-attacks grows, companies and organizations, especially those that deal with information related
to national security, health, or financial records, need to use strong cybersecurity measures and
processes to protect their sensitive business and personal information.
Cyber Security Goals
Cyber Security's main objective is to ensure data protection. The security community provides a
triangle of three related principles to protect the data from cyber-attacks. This principle is called the
CIA triad. The CIA model is designed to guide policies for an organization's information security
infrastructure. When any security breaches are found, one or more of these principles has been
violated.
We can break the CIA model into three parts: Confidentiality, Integrity, and Availability. It is actually
a security model that helps people to think about various parts of IT security.
Confidentiality
Confidentiality is equivalent to privacy; it means avoiding unauthorized access to information. It involves
ensuring the data is accessible by those who are allowed to use it and blocking access to others. It
prevents essential information from reaching the wrong people. Data encryption is an excellent
example of ensuring confidentiality.
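As a brief, hedged illustration, symmetric encryption keeps data confidential so that only holders of
the key can read it; this sketch assumes the third-party cryptography package is installed, and the
message is made up.

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

key = Fernet.generate_key()       # must be kept secret and shared only with authorized parties
cipher = Fernet(key)

token = cipher.encrypt(b"card ending 4242, limit 5000")
print(token)                      # unreadable without the key
print(cipher.decrypt(token))      # only key holders recover the original data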
Integrity
This principle ensures that the data is authentic, accurate, and safeguarded from unauthorized
modification by threat actors or accidental user modification. If any modifications occur, certain
measures should be taken to protect the sensitive data from corruption or loss and speedily recover
from such an event. In addition, it implies ensuring that the source of the information is genuine.
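As a short illustration using only the Python standard library, a keyed hash (HMAC) lets a receiver
detect whether data has been modified in transit; the key and messages are illustrative.

import hmac, hashlib

secret_key = b"shared-secret"                      # assumption: agreed out of band
message = b"amount=100&to=alice"

# Sender computes a tag over the message with the shared key.
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag; any modification of the message changes it.
tampered = b"amount=9999&to=mallory"
print(hmac.compare_digest(tag, hmac.new(secret_key, message, hashlib.sha256).hexdigest()))   # True
print(hmac.compare_digest(tag, hmac.new(secret_key, tampered, hashlib.sha256).hexdigest()))  # False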
Availability
This principle ensures that information is always available and useful to its authorized people. It
ensures that these accesses are not hindered by system malfunction or cyber-attacks.
Types of Cyber Security Threats
Malware
Malware means malicious software, which is the most common cyber attacking tool. It is used by the
cybercriminal or hacker to disrupt or damage a legitimate user's system. The following are the
important types of malware created by the hacker:
Virus: It is a malicious piece of code that spreads from one device to another. It can attach
itself to clean files and spread throughout a computer system, infecting files, stealing
information, or damaging devices.
Spyware: It is software that secretly records information about user activities on their
system. For example, spyware could capture credit card details that can then be used by
cybercriminals for unauthorized shopping, withdrawing money, etc.
Trojans: It is a type of malware or code that appears as legitimate software or file to fool us
into downloading and running. Its primary purpose is to corrupt or steal data from our device
or do other harmful activities on our network.
Ransomware: It's a piece of software that encrypts or erases a user's files and data on a
device, rendering them unusable. A monetary ransom is then demanded by malicious
actors for decryption.
Worms: It is a piece of software that spreads copies of itself from device to device without
human interaction. It does not need to attach itself to any program to steal or
damage data.
Adware: It is an advertising software used to spread malware and displays advertisements on
our device. It is an unwanted program that is installed without the user's permission. The main
objective of this program is to generate revenue for its developer by showing the ads on their
browser.
Botnets: It is a collection of internet-connected, malware-infected devices that allow
cybercriminals to control them. Botnets enable cybercriminals to leak credentials, gain
unauthorized access, and steal data without the user's permission.
Phishing
Phishing is a type of cybercrime in which a message appears to come from a genuine organization like
PayPal, eBay, financial institutions, or friends and co-workers. The attackers contact a target or targets via
email, phone, or text message with a link and persuade them to click on that link. This link will
redirect them to fraudulent websites to provide sensitive data such as personal information, banking
and credit card information, social security numbers, usernames, and passwords. Clicking on the link
will also install malware on the target devices that allow hackers to control devices remotely.
Man-in-the-middle (MITM) attack
A man-in-the-middle attack is a type of cyber threat (a form of eavesdropping attack) in which a
cybercriminal intercepts a conversation or data transfer between two individuals. Once the
cybercriminal places themselves in the middle of a two-party communication, they seem like genuine
participants and can get sensitive information and return different responses. The main objective of
this type of attack is to gain access to our business or customer data. For example, a cybercriminal
could intercept data passing between the target device and the network on an unprotected Wi-Fi
network.
Distributed denial of service (DDoS)
It is a type of cyber threat or malicious attempt where cybercriminals disrupt the regular traffic of
targeted servers, services, or networks by flooding the target or its surrounding infrastructure with
Internet traffic. Because the requests come from several IP addresses, they can make the system
unusable, overload its servers, slow them down significantly or temporarily take them
offline, or prevent an organization from carrying out its vital functions.
Brute Force
A brute force attack is a cryptographic hack that uses a trial-and-error method to guess all possible
combinations until the correct information is discovered. Cybercriminals usually use this attack to
obtain personal information about targeted passwords, login info, encryption keys, and Personal
Identification Numbers (PINS).
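The toy sketch below shows why a short PIN falls quickly to brute force: a four-digit PIN has only 10,000 combinations, so every guess can be tried against a locally stored hash in a fraction of a second. The PIN and its hash are invented and no real system is involved; defences such as rate limiting, account lockout, and longer secrets exist precisely to defeat this approach.

# Brute force sketch: exhaustively guessing a 4-digit PIN against a local hash.
import hashlib
from itertools import product

stored = hashlib.sha256(b"4321").hexdigest()      # stand-in for a leaked PIN hash

for digits in product("0123456789", repeat=4):    # at most 10,000 guesses
    guess = "".join(digits)
    if hashlib.sha256(guess.encode()).hexdigest() == stored:
        print("PIN recovered:", guess)
        break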
SQL Injection (SQLI)
SQL injection is a common attack that occurs when cybercriminals use malicious SQL scripts for
backend database manipulation to access sensitive information. Once the attack is successful, the
malicious actor can view, change, or delete sensitive company data, user lists, or private customer
details stored in the SQL database.
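The sketch below contrasts a vulnerable query built by string concatenation with the standard fix, a parameterized query, using Python's built-in sqlite3 module; the table and the malicious input are invented.

# SQL injection sketch: string concatenation vs. a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'top-secret')")

user_input = "nobody' OR '1'='1"              # malicious input

# Vulnerable: the input becomes part of the SQL text and changes its meaning.
rows = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
print("vulnerable query returned:", rows)     # leaks every row

# Safe: the placeholder treats the input purely as data.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print("parameterized query returned:", rows)  # returns nothing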
Domain Name System (DNS) attack
A DNS attack is a type of cyberattack in which cyber criminals take advantage of flaws in the Domain
Name System to redirect site users to malicious websites (DNS hijacking) and steal data from affected
computers. It is a severe cybersecurity risk because the DNS system is an essential element of the
internet infrastructure.
Benefits of Cybersecurity
Service-Oriented Terminologies
Services - The services are the logical entities defined by one or more published interfaces.
Service provider - It is a software entity that implements a service specification.
Service consumer - It can be called a requestor or client that calls a service provider. A
service consumer can be another service or an end-user application.
Service locator - It is a service provider that acts as a registry. It is responsible for examining
service provider interfaces and service locations.
Service broker - It is a service provider that passes service requests to one or more additional
service providers.
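To make the provider, consumer, and registry roles concrete, here is a toy in-memory registry written with nothing beyond the Python standard library. Real registries (for example UDDI-style registries) are far richer; the service name and endpoint are invented.

# A toy service registry: providers publish, consumers (or a locator) look up.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint):
        # A service provider publishes its interface name and location.
        self._services[name] = endpoint

    def locate(self, name):
        # A service consumer or locator finds the provider by name.
        return self._services[name]

registry = ServiceRegistry()
registry.publish("OrderService", "https://example.com/orders")   # provider side
print(registry.locate("OrderService"))                           # consumer side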
Characteristics of SOA
Functional aspects
Transport - It transports the service requests from the service consumer to the service
provider and service responses from the service provider to the service consumer.
Service Communication Protocol - It allows the service provider and the service consumer
to communicate with each other.
Service Description - It describes the service and data required to invoke it.
Service - It is an actual service.
Business Process - It represents the group of services called in a particular sequence
associated with the particular rules to meet the business requirements.
Service Registry - It contains the descriptions and data that service providers use to
publish their services.
Guiding Principles of SOA:
1. Standardized service contract: Specified through one or more service description documents.
2. Loose coupling: Services are designed as self-contained components and maintain relationships that
minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and description documents.
They hide their logic, which is encapsulated within their implementation.
4. Reusability: Designed as components, services can be reused more effectively, thus reducing
development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and, from a service consumer
point of view, there is no need to know about their implementation.
6. Discoverability: Services are defined by description documents that constitute supplemental
metadata through which they can be effectively discovered. Service discovery provides an
effective means for utilizing third-party resources.
7. Composability: Using services as building blocks, sophisticated and complex operations can be
implemented. Service orchestration and choreography provide a solid support for composing
services and achieving business goals.
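The sketch below illustrates a standardized contract, loose coupling, and abstraction using Python's typing.Protocol: the consumer depends only on the published contract, so any implementation can be swapped in or reused. The service names are invented for illustration.

# Loose coupling sketch: the consumer knows only the contract, not the implementation.
from typing import Protocol

class PaymentService(Protocol):                       # the published service contract
    def pay(self, order_id: str, amount: float) -> bool: ...

class CardPaymentService:                             # one autonomous implementation
    def pay(self, order_id: str, amount: float) -> bool:
        print(f"charging card {amount} for order {order_id}")
        return True

def checkout(order_id: str, amount: float, service: PaymentService) -> bool:
    # Depends only on the contract; implementations can change without touching this code.
    return service.pay(order_id, amount)

print(checkout("A-42", 19.99, CardPaymentService()))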
Advantages of SOA:
Service reusability: In SOA, applications are made from existing services. Thus, services can
be reused to make many applications.
Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
Availability: SOA facilities are easily available to anyone on request.
Reliability: SOA applications are more reliable because it is easy to debug small services
rather than huge code bases.
Scalability: Services can run on different servers within an environment; this increases
scalability.
Disadvantages of SOA:
RESTful Services
RESTful Services are client and server applications that communicate over the WWW. RESTful
Services are REST Architecture based Services. In REST Architecture, everything is a resource.
RESTful Services provides communication between software applications running on different
platforms and frameworks. We can consider web services as code on demand. A RESTful Service is a
function or method which can be called by sending an HTTP request to a URL, and the service returns
the result as the response. In this section, you will learn the basics of RESTful Services with suitable
examples.
The following four HTTP methods are commonly used in REST-based architecture:
GET − Provides read-only access to a resource.
POST − Creates a new resource.
PUT − Updates an existing resource or creates a new resource.
DELETE − Removes a resource.
A resource can be represented in many formats. A good representation format should have the
following characteristics:
Understandability − Both the Server and the Client should be able to understand and utilize
the representation format of the resource.
Completeness − Format should be able to represent a resource completely. For example, a
resource can contain another resource. Format should be able to represent simple as well as
complex structures of resources.
Linkability − A resource can have a linkage to another resource, so a format should be able to
handle such situations.
However, at present most of the web services are representing resources using either XML or JSON
format. There are plenty of libraries and tools available to understand, parse, and modify XML and
JSON data.
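As a minimal sketch of a RESTful resource that returns JSON representations, the code below assumes the third-party Flask package (pip install flask); the /books resource and its data are made up for illustration.

# A minimal RESTful resource exposing GET and POST on /books.
from flask import Flask, jsonify, request

app = Flask(__name__)
books = {1: {"id": 1, "title": "Software Engineering"}}   # in-memory stand-in for a database

@app.route("/books/<int:book_id>", methods=["GET"])
def get_book(book_id):
    book = books.get(book_id)
    return (jsonify(book), 200) if book else (jsonify({"error": "not found"}), 404)

@app.route("/books", methods=["POST"])
def add_book():
    new_id = max(books) + 1
    books[new_id] = {"id": new_id, **request.get_json()}
    return jsonify(books[new_id]), 201

if __name__ == "__main__":
    app.run()    # e.g. GET /books/1 returns the JSON representation of that resource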
RESTful Services Messages
RESTful Web Services make use of HTTP protocols as a medium of communication between client
and server. A client sends a message in the form of an HTTP Request and the server responds in the form
of an HTTP Response. This technique is termed Messaging. These messages contain message data
and metadata, i.e. information about the message itself. Let us have a look at the HTTP Request and
HTTP Response messages for HTTP 1.1.
HTTP Request
An HTTP Request has five major parts
Verb − Indicates the HTTP methods such as GET, POST, DELETE, PUT, etc.
URI − Uniform Resource Identifier (URI) to identify the resource on the server.
HTTP Version − Indicates the HTTP version. For example, HTTP v1.1.
Request Header − Contains metadata for the HTTP Request message as key-value pairs. For
example, client (or browser) type, format supported by the client, format of the message body,
cache settings, etc.
Request Body − Message content or Resource representation.
HTTP Response
An HTTP Response has four major parts
Status/Response Code − Indicates the Server status for the requested resource. For example,
404 means resource not found and 200 means response is ok.
HTTP Version − Indicates the HTTP version. For example HTTP v1.1.
Response Header − Contains metadata for the HTTP Response message as key-value pairs.
For example, content length, content type, response date, server type, etc.
Response Body − Response message content or Resource representation.
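The parts listed above can be seen directly from client code. The sketch below uses the third-party requests package against the public httpbin.org test service; both are assumptions for illustration only.

# Inspecting the request and response parts of a simple GET.
import requests

resp = requests.get(                              # Verb: GET
    "https://httpbin.org/get",                    # URI identifying the resource
    headers={"Accept": "application/json"},       # Request header metadata
)

print(resp.status_code)                 # Status/Response code, e.g. 200
print(resp.headers["Content-Type"])     # Response header metadata
print(resp.json())                      # Response body: the resource representation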
RESTful Services Statelessness
RESTful Web Service should not keep a client state on the server. This restriction is called
Statelessness. It is the responsibility of the client to pass its context to the server and then the server
can store this context to process the client's further requests. For example, a session maintained by the
server is identified by a session identifier passed by the client.
RESTful Web Services should adhere to this restriction: web service methods should not store
any information from the clients that invoke them.
RESTful Services Security
As RESTful Services work with HTTP URL Paths, it is very important to safeguard a RESTful Web
Service in the same manner as a website is secured.
Following are the best practices to be adhered to while designing a RESTful Service −
Validation − Validate all inputs on the server. Protect your server against SQL or NoSQL
injection attacks.
Session Based Authentication − Use session based authentication to authenticate a user
whenever a request is made to a Web Service method.
No Sensitive Data in the URL − Never use a username, password or session token in a URL;
these values should be passed to the Web Service via the POST method.
Restriction on Method Execution − Allow restricted use of methods like GET, POST and
DELETE. The GET method should not be able to delete data.
Validate Malformed XML/JSON − Check for well-formed input passed to a web service
method.
Throw generic Error Messages − A web service method should use HTTP error messages
like 403 to show access forbidden, etc.
Service Engineering
Service engineering, also called service-oriented software engineering, is a software engineering
process that attempts to decompose the system into self-running units that either perform services or
expose services (reusable services). Service oriented applications are designed around loosely-coupled
services, meaning there are simple standards and protocols which are followed by all concerned,
while behind them are a wide variety of technological services which can be far more complex. The
reusable services are often provided by many different service providers, all of whom collaborate
dynamically with service users and service registries.
The Actors in Service Engineering
There are three types of actors in a service-oriented environment. These are:
Service providers: These are software services that publish their capabilities and their
availability with service registries.
Service users: These are software systems (which may be services themselves) that use the
services provided by service providers. Service users can use service registries to discover
and locate the service providers they need.
Service registries: These are constantly evolving catalogs of information that can be queried
to see what type of services are available.
Characteristics of Services in Service Engineering
The provision of the service is independent of the application using the service.
Services are platform independent and implementation language independent.
They are easier to test since they are small and independent. This makes them more reliable
for use in applications.
Since services are individual pieces of functionality rather than a large piece of code, they can
be reused in multiple applications, therefore lowering the cost of development of future tools.
Services can be developed in parallel since they are independent of each other. This reduces
the time it takes to develop the software.
Since the location of a service doesn't matter, the service can be moved to a more powerful
server if needed. There can also be separate instances of the service running on different
servers.
Service Engineer Responsibilities:
Using various strategies and tools to provide effective solutions to customers' concerns.
Communicating with clients, engineers, and other technicians to ensure that services are
delivered effectively.
Promptly following up on service requests and providing customer feedback.
Monitoring equipment and machinery performance and developing preventative maintenance
measures.
Conducting quality assurance and safety checks on all equipment.
Delivering demonstrations to ensure that customers are educated on safe and effective
equipment use.
Providing recommendations about new features and product improvements.
Monitoring inventory and reordering materials when needed.
Conducting research and attending workshops to remain abreast of industry developments.
Writing reports and presenting findings to Managers and Supervisors on a regular basis.
Service Engineer Requirements:
Service Composition
Service composition is a collection of services in which many smaller services are combined into a
larger service. For example, smaller services A, B and C can be combined together to compose one
larger service.
Service Composition Performance
Unlike component composition, where communication happens inside the same application, the services
in a composition communicate with each other through a network, so inter-service communication is
much slower than inter-component communication. The performance will be poor if the services
communicate internally through an ESB (Enterprise Service Bus) and larger services are decomposed
into many smaller services.
Service compositions can be categorized into primitive and complex variations. In early service-oriented
solutions, simple logic was implemented through point-to-point exchanges or primitive compositions.
As the technology developed, complex compositions became more common.
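A toy composition follows: three small services, written here as plain Python functions, are combined into one larger order service. In a real system each call would be a network request, which is exactly why the performance concerns above matter; the service names and data are invented.

# Composing a larger service from three smaller ones.
def service_a(order_id):                 # e.g. fetch the order
    return {"order": order_id, "items": ["book"]}

def service_b(order):                    # e.g. price the order
    return {**order, "total": 19.99}

def service_c(order):                    # e.g. arrange shipping
    return {**order, "shipping": "2 days"}

def composed_order_service(order_id):
    # The large service built by combining services A, B and C.
    return service_c(service_b(service_a(order_id)))

print(composed_order_service("A-42"))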
Systems engineering
Systems engineering is an interdisciplinary field of engineering and engineering management that
focuses on how to design, integrate, and manage complex systems over their life cycles. At its core,
systems engineering utilizes systems thinking principles to organize this body of knowledge. The
individual outcome of such efforts, an engineered system, can be defined as a combination of
components that work in synergy to collectively perform a useful function.
Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing
and evaluation, maintainability and many other disciplines necessary for successful system design,
development, implementation, and ultimate decommission become more difficult when dealing with
large or complex projects. Systems engineering deals with work-processes, optimization methods, and
risk management tools in such projects. Systems engineering ensures that all likely aspects of a
project or system are considered and integrated into a whole.
The systems engineering process is a discovery process that is quite unlike a manufacturing process.
A manufacturing process is focused on repetitive activities that achieve high quality outputs with
minimum cost and time. The systems engineering process must begin by discovering the real
problems that need to be resolved, and identifying the most probable or highest impact failures.
Holistic view
Systems engineering focuses on analyzing and eliciting customer needs and required functionality
early in the development cycle, documenting requirements, then proceeding with design synthesis and
system validation while considering the complete problem, the system lifecycle. This includes fully
understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering
process can be decomposed into
System architecture,
System model, Modeling, and Simulation,
Optimization,
System dynamics,
Systems analysis,
Statistical analysis,
Reliability analysis, and
Decision making
Taking an interdisciplinary approach to engineering systems is inherently complex since the behavior
of and interaction among system components is not always immediately well-defined or understood.
Defining and characterizing such systems and subsystems and the interactions among them is one of
the goals of systems engineering. In doing so, the gap that exists between informal requirements from
users, operators, marketing organizations, and technical specifications is successfully bridged.
Systems engineering processes
Systems engineering processes encompass all creative, manual and technical activities necessary to
define the product and which need to be carried out to convert a system definition to a sufficiently
detailed system design specification for product manufacture and deployment. Design and
development of a system can be divided into four stages, each with different definitions. Models play
important and diverse roles throughout this process; a model can be defined in several ways, including:
An abstraction of reality designed to answer specific questions about the real world
An imitation, analogue, or representation of a real world process or structure; or
A conceptual, mathematical, or physical tool to assist a decision maker.
Together, these definitions are broad enough to encompass physical engineering models used in the
verification of a system design, as well as schematic models like a functional flow block diagram and
mathematical models used in the trade study process. This section focuses on the last.
The main reason for using mathematical models and diagrams in trade studies is to provide estimates
of system effectiveness, performance or technical attributes, and cost from a set of known or
estimable quantities. Typically, a collection of separate models is needed to provide all of these
outcome variables. The heart of any mathematical model is a set of meaningful quantitative
relationships among its inputs and outputs. These relationships can be as simple as adding up
constituent quantities to obtain a total, or as complex as a set of differential equations describing the
trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just
correlation. Furthermore, key to successful systems engineering activities are also the methods with
which these models are efficiently and effectively managed and used to simulate the systems.
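As a deliberately simple example of "adding up constituent quantities to obtain a total", the sketch below rolls up invented subsystem estimates into the system-level mass and cost figures a trade study would compare.

# A toy trade-study model: system totals from subsystem estimates (figures invented).
subsystems = {
    "structure": {"mass_kg": 120.0, "cost_k": 300.0},
    "power":     {"mass_kg":  45.0, "cost_k": 150.0},
    "payload":   {"mass_kg":  60.0, "cost_k": 500.0},
}

total_mass = sum(s["mass_kg"] for s in subsystems.values())
total_cost = sum(s["cost_k"] for s in subsystems.values())

print(f"estimated system mass: {total_mass} kg, estimated cost: {total_cost} k$")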
Sociotechnical Systems
Sociotechnical systems (STS) in organizational development is an approach to complex
organizational work design that recognizes the interaction between people and technology in
workplaces. The term also refers to coherent systems of human relations, technical objects, and
cybernetic processes that inhere in large, complex infrastructures. Society itself, and its constituent
substructures, qualify as complex sociotechnical systems.
Sociotechnical theory is about joint optimization, with a shared emphasis on achievement of both
excellence in technical performance and quality in people's work lives. Sociotechnical theory, as
distinct from sociotechnical systems, proposes a number of different ways of achieving joint
optimisation. They are usually based on designing different kinds of organisation, according to which
the functional output of different sociotechnical elements leads to system efficiency, productive
sustainability, user satisfaction, and change management.
Sociotechnical refers to the interrelatedness of social and technical aspects of an organization.
Sociotechnical theory is founded on two main principles:
One is that the interaction of social and technical factors creates the conditions for successful
organizational performance. This interaction consists partly of linear "cause and effect"
relationships and partly from "non-linear", complex, even unpredictable relationships.
Whether designed or not, both types of interaction occur when socio and technical elements
are put to work.
The corollary of this, and the second of the two main principles, is that optimization of each
aspect alone tends to increase not only the quantity of unpredictable, "un-designed"
relationships, but those relationships that are injurious to the system's performance.
Sustainability
Standalone, incremental improvements are not sufficient to address current, let alone future
sustainability challenges. These challenges will require deep changes of sociotechnical systems.
Theories on innovation systems; sustainable innovations; system thinking and design; and
sustainability transitions, among others, have attempted to describe potential changes capable of
shifting development towards more sustainable directions.
Sociotechnical perspectives also play a crucial role in the creation of systems that have long-term
sustainability. In the development of new systems, the consideration of sociotechnical factors from the
perspectives of the affected stakeholders ensures that a sustainable system is created which is both
engaging and benefits everyone involved.
Any organisation that tries to become sustainable must take into consideration its many
dimensions - financial, ecological and (socio-)technical. However, for many stakeholders the main
aim of sustainability is to be economically viable. Without long-term economic sustainability, the very
existence of the organisation could come under question, potentially shutting the business
down.
Benefits of sociotechnical systems
Viewing the work system as a whole, making it easier to discuss and analyse
More organised approach by even outlining basic understanding of a work system
A readily usable analysis method making it more adaptable for performing analysis of a work
system
Does not require guidance by experts and researchers
Reinforces the idea that a work system exists to produce a product(s)/service(s)
Easier to theorize potential staff reductions, job roles changing and reorganizations
Encourages motivation and good will while reducing the stress from monitoring
Conscious that documentation and practice may differ
Process improvement
Process improvement in organizational development is a series of actions taken to identify, analyze
and improve existing processes within an organization to meet new goals and objectives. These
actions often follow a specific methodology or strategy to create successful results.
Task analysis
Task analysis is the analysis of how a task is accomplished, including a detailed description of both
manual and mental activities, task and element durations, task frequency, task allocation, task
complexity, environmental conditions, necessary clothing and equipment, and any other unique
factors involved in or required for one or more people to perform a given task. This information can
then be used for many purposes, such as personnel selection and training, tool or equipment design,
procedure design and automation.
Job design
Job design or work design in organizational development is the application of sociotechnical systems
principles and techniques to the humanization of work, for example, through job enrichment. The
aims of work design are improved job satisfaction, improved throughput, improved quality and
reduced employee problems, e.g., grievances and absenteeism.
Evolution of socio-technical systems
Socio-technical design was originally approached exclusively as a social system; the need for joint
optimisation of the social and technical systems was only realised later. The field has since been divided
into primary work, which looks into principles and description, and work on how to incorporate
technical designs at a macrosocial level.
Conceptual Design
Conceptual design is a framework for establishing the underlying idea behind a design and a plan for
how it will be expressed visually. It is related to the term “concept art”, which is an illustration (often
used in the preproduction phase of a film or a video game) that conveys the vision of the artist for
how the final product might take form. Similarly, conceptual design occurs early on in the design
process, generally before fine details such as exact color choices or illustration style. The only tools
required are a pen and paper.
Conceptual design has the root word “concept,” which describes the idea and intention behind the
design. This is contrasted by “execution”, which is the implementation and shape that a design
ultimately takes.
Essentially, the concept is the plan, and the execution is the follow-through action. Designs are often
evaluated for quality in both of these areas: concept vs execution. In other words, a critic might ask:
what is a design trying to say, and how well does it say it? Conceptual design is what allows designers
to evoke the underlying idea of a design through imagery.
Most importantly, you can’t have one without the other. A poorly executed design with a great
concept will muddle its message with an unappealing art style. A well-executed design with a poor
concept might be beautiful, but it will do a poor job of connecting with viewers and/or expressing a
brand. For the purposes of this article, we’ll focus on the concept, whereas execution involves
studying the particulars of design technique.
The purpose of conceptual design
The purpose of conceptual design is to give visual shape to an idea. Towards that end, there are three
main facets to the goals of conceptual design:
Conceptual design bridges the gap between the message and the imagery.
To establish a basis of logic
Artistic disciplines have a tendency to be governed by emotion and gut feeling. Designs, however, are
meant to be used. Whether it is a piece of software or a logo, a design must accomplish something
practical such as conveying information or expressing a brand—all on top of being aesthetically
pleasing. Conceptual design is what grounds the artwork in the practical questions of why and how.
To create a design language
Since the concept is essentially just an idea, designers must bridge the gap between abstract thought
and visual characteristics. Design language describes using design elements purposefully to
communicate and evoke meaning.
As explained earlier, the conceptual design phase isn’t going to go as far as planning every stylistic
detail, but it will lay the groundwork for meaningful design choices later on. Conceptual design exists
to make sure that imagery communicates its message effectively.
To achieve originality
There’s a famous saying that nothing is original, and this is true to an extent. The practice of design—
like any artistic discipline—is old, with designers building on the innovations of those who came
before.
But you should at least aspire to stand on the shoulders of those giants. And the concept and ideation
phase in the design process is where truly original creative sparks are most likely to happen.
The conceptual design approach
Now that we understand what conceptual design is and its purpose, we can talk about how it is done.
The conceptual design approach can be broken down into four steps and we’ll discuss each in detail.
It is important to note that these steps don’t have to be completed in any particular order. For
example, many designers jump to doodling without any concrete plan of what they are trying to
achieve. How a person comes up with ideas is personal and depends on whatever helps them think.
It can also be related to how you best learn—e.g. people who learn best by taking notes might have an
easier time organizing their concepts by writing them down. And sometimes taking a more analytical
approach (such as research) early on can constrain creativity whereas the opposite can also lead to
creativity without a purpose.
Whatever order you choose, we would recommend that you do go through all of the steps to get a
concept that is fully thought through. With that out of the way, let’s dive into the conceptual design
process.
First you have to unravel the problem.
1. Definition
You must start your design project by asking why the project is necessary. What is the specific goal of
the design and what problem is it meant to solve?
Defining the problem can be a lot trickier than it at first appears because problems can be complex.
Often, a problem can be a symptom of deeper issues, and you want to move beyond the surface to
uncover the root causes.
One technique for doing so is known as the Five Whys, in which you are presented with a problem
and keep asking “Why?” until you arrive at a more nuanced understanding. If you fail to
get to the exact root of the problem, your design solution will ultimately be flawed. And the
design solution—the answer to the problem—is just another way of describing the concept.
2. Research
Designs must eventually occupy space (whether physical or digital) in the real world. For this reason,
a design concept must be grounded in research, where you will understand the context in which the
design must fit.
Researching the people who will interact with the design is essential to solidifying the concept.
This can start with getting information on the client themselves—who is the brand, and what is their
history, mission, and personality? You must also consider the market.
Who are the people that will interact with the design? In order for the concept to speak effectively to
these people, you must conduct target audience research to understand who they are and what they are
looking for in a design. Similarly, researching similar designs from competitors can help you
understand industry conventions as well as give you ideas for how to set your concept apart.
Finally, you will want to research the work of other designers in order to gather reference material and
inspiration, especially from those you find particularly masterful. Doing so can show you conceptual
possibilities you might never have imagined, challenging you to push your concepts. You’ll want to
collect these in a mood board, which you will keep handy as you design.
3. Verbal ideation
Concepts are essentially thoughts—which is to say, they are scattered words in our minds. In order to
shape a concept into something substantial, you need to draw some of those words out. This phase is
generally referred to as brainstorming, in which you will define your concept verbally.
In graphic design, especially in regards to logos, the brand name is often the starting point for
generating concepts of representational imagery. This can be as straightforward as
simply posing the problem (see the first step) and creating a list of potential solutions.
There are also some helpful word-based techniques, such as mind-mapping or free association. In
both of these cases, you generally start with a word or phrase (for logos, this is usually the brand name
and for other designs, it can be based on some keywords from the brief). You then keep writing
associated words that pop into your head until you have a long list. It is also important to give
yourself a time limit so that you brainstorm quickly without overthinking things.
The purpose of generating words is that these can help you come up with design characteristics (in the
next step) to express your concept. For example, the word “freedom” can translate into loose flowing
lines or an energetic character pose.
Ultimately, it is helpful to organize these associated ideas into a full sentence or phrase that articulates
your concept and what you are trying to accomplish. This keeps your concept focused throughout the
design process.
4. Visual ideation
At some point, concepts must make the leap from abstract ideas to a visual design. Designers usually
accomplish this through sketching. One helpful approach is to create thumbnails, which are sketches of
a design that are small enough to fit several on the same page.
Like brainstorming (or verbal ideation) the goal is to come up with sketches fast so that your ideas can
flow freely. You don’t want to get hung up on your first sketch or spend too much time on minute
detail. Right now, you are simply visualizing possible interpretations of the concept. The concept is
often visually expressed through a sketch. The final design may differ from the conceptual sketch,
once the design has been refined with detail and color.
This phase is important because while you may think you have the concept clear in your mind, seeing
it on the page is the true test of whether it holds water. You may also surprise yourself with a sketch
that articulates your concept better than you could have planned. Once you have a couple of sketches that
you like, you can refine them into a much larger and more detailed sketch. This will give you a
presentable version from which you can gather feedback.
Dream big with conceptual design
The remainder of the design process is spent executing the concept. You will use the software of your
choice to create a working version of your design, such as a prototype or mockup. Assuming your
design is approved by the client, test users or any other stakeholders, you can go about creating the
final version. If not, use conceptual design to revisit the underlying concept.
Conceptual design is the bedrock of any design project. For this reason, it is extremely important to
get right. Creating a concept can be difficult and discouraging—over time, you might find your
garbage bin overflowing with rejected concepts.
But this is exactly why it is so helpful to have a delineated process like conceptual design to guide you
through the messy work of creating ideas. But at the end of the day, getting a design of value will
require both a great concept and a skilled designer.
System Procurement
The responsibility of ensuring an uninterrupted supply of goods and services required for the smooth
functioning of the organization lies with the procurement department. The procurement manager
ensures that the right quantity of goods at optimal cost is available for various departments of the
organization, without compromising on the quality of goods or services.
Importance of Procurement Management System
A well-managed procurement function improves business outcomes significantly. Before making a
purchase, the procurement department needs to analyze the market to ensure that the best deal is
made. Here are some of the important functions carried out by the procurement department:
Working on Purchase Deals:
Purchase managers need to perform an in-depth analysis of the market before concluding a purchase.
Getting the best deal for goods and services in terms of pricing and quality is an important
procurement function. Thorough evaluation of all the vendors based on their reputation, timely
delivery of orders, and quoted price needs to be done by the procurement team.
Compliance Management:
One of the most important responsibilities of the procurement manager is to ensure that every
purchase order adheres to the policies and processes defined within the organization and compliant
with the procurement laws and regulations. To ensure compliance with laws and regulations, the
procurement manager needs to stay updated on the changes in laws and regulations and update the
company policies accordingly. An e-procurement system that flags inconsistencies and non-
compliance will help the procurement manager identify and resolve compliance-related issues
efficiently.
Establish Strong Vendor Relationships:
Maintaining a strong and reliable vendor base is extremely important in procurement. Continuous
review of vendor contracts and relationships is needed to ensure that the organization’s requirements
are adequately met. All vendors approved by the procurement department must provide good quality
service and goods and adhere to terms and policies while in contract with the company. Procurement
KPIs on vendor performance must be reviewed periodically to ensure consistent performance.
Management and Coordination of Procurement Staff:
The procurement manager must enable seamless communication between all the stakeholders. The
procurement management system must ensure that daily tasks and operations within the procurement
team are well planned and coordinated. Completing tasks before deadlines, quick resolution of issues
and process bottlenecks, and quality inspection of delivered goods and services are some of the
routine tasks performed by the procurement staff.
Maintaining and Updating Data:
Procurement data needs to be maintained and updated for audit and regulatory compliance. Data on
the items purchased, cost price, supplier information, inventory list, and delivery information must be
updated by the team. Accurate documentation forms the basis for reporting and budgeting.
An e-procurement software empowers the procurement team to carry out its functions efficiently with
optimal resource utilization. Procurement solutions ensure seamless communication and coordination
between various procurement tasks, accurate and consistent documentation, and cost and time
savings. Procurement staff are freed from repetitive tasks so that they can focus on finer details and
contribute more towards organizational growth. An e-procurement system enables CPOs to make
data-driven business decisions.
Archaic procurement systems are not equipped to deal with the complexity of modern procurement
processes. Manual procurement systems slow down the processing speed and introduce errors and
discrepancies in the procurement process. Deploying a cloud procurement system eliminates the
inefficiencies and discrepancies in the procurement workflow.
Procurement System
A procurement or purchasing system helps organizations streamline and automate the process of
purchasing goods or services and manage inventory. All the processes related to the procurement
function can be efficiently handled through the procurement management system.
What is the meaning of procurement management? Procurement management is also referred to as the
source-to-settle or procure-to-pay process. The procurement management system manages all the
steps from sourcing to payout. In addition to managing the entire purchase lifecycle, procurement
management is also about managing vendor relationships and streamlining the procurement process.
Procurement systems empower procurement managers to manage all the steps in the purchasing
process with ease. Procurement management is a complex process that spans several interrelated
activities like transactional purchasing of services and goods, inventory management, integration with
accounts payable, and updating supporting documents. A procurement application helps manage all of
these activities.
System Development
Systems development is the procedure of defining, designing, testing, and implementing a new
software application or program. It comprises the internal development of customized systems, the
establishment of database systems, or the acquisition of third-party developed software. Written
standards and techniques must govern all information systems processing functions. The
management of the company must define and enforce standards and adopt suitable system
development life cycle practices that manage the process of developing, acquiring, implementing, and
maintaining computerized information systems and associated technology.
System Development Management Life-cycle
It is maintained in management studies that the most effective way to protect information and information
systems is to incorporate security into every step of the system development process, from the
initiation of a project to develop a system to its disposition. The manifold process that begins with the
initiation, analysis, design, and implementation, and continues through the maintenance and disposal
of the system, is called the System Development Life Cycle (SDLC).
Phases of System Development
A system development project comprises numerous phases, such as feasibility analysis,
requirements analysis, software design, software coding, testing and debugging, and installation and
maintenance.
1. A feasibility study is employed to decide whether a project should proceed. This will
include an initial project plan and budget estimates for future stages of the project. In the
example of the development of a central ordering system, a feasibility study would look
at how a new central ordering system might be received by the various departments and
how costly the new system would be relative to improving each of these individual
systems.
2. Requirement analysis identifies the requirements for the system. This includes a detailed
analysis of the specific problem being addressed or the expectations of a particular
system. It can be said that analysis articulates what the system is supposed to do. For
the central ordering system, the analysis would cautiously scrutinize existing ordering
systems and how to use the best aspects of those systems, while taking advantage of the
potential benefits of more centralized systems.
3. The design phase consists of determining what programs are required and how they are
going to interact, how each individual program is going to work, what the software
interface is going to look like and what data will be required. System design may use
tools such as flowcharts and pseudo-code to develop the specific logic of the system.
4. In the implementation stage, the design is translated into code. This requires
choosing the most suitable programming language and writing the actual code needed to
make the design work. In this stage, the central ordering system is essentially coded using
a particular programming language. This would also include developing a user interface
that the various departments are able to use efficiently.
5. Testing and debugging stage encompasses testing individual modules of the system as
well as the system as a whole. This includes making sure the system actually does what is
expected and that it runs on intended platforms. Testing during the early stages of a
project may involve using a prototype, which meets some of the very basic requirements
of the system but lacks many of the details.
6. In Installation phase, the system is implemented so that it becomes part of the workflows
of the organization. Some training may be needed to make sure employees are comfortable with
using the system. At this stage, the central ordering system is installed in all departments,
replacing the older system.
7. All systems need some types of maintenance. This may consist of minor updates to the
system or more drastic changes due to unexpected circumstances. As the organization and
its departments evolve, the ordering process may require some modifications. This makes
it possible to get the most out of a new centralized system.
Figure: Phases of the system development cycle
Whitten and Bentley (1998) recommended the following categories of system development project
lifecycle:
1. Planning
2. Analysis
3. Design
4. Implementation
5. Support
There are many different SDLC models and methodologies, such as Fountain, Spiral, and rapid
prototyping, but each usually consists of a series of defined steps. For any SDLC model that is used,
information security must be integrated into the SDLC to ensure appropriate protection for the
information that the system will transmit, process, and store.
System development life-cycle models (Source: Conrick, 2006)
Fountain model - Recognizes that there is considerable overlap of activities throughout the
development cycle.
Spiral model - Emphasizes the need to go back and reiterate earlier stages, like a series of short
waterfall cycles, each producing an early prototype representing a part of the entire cycle.
Build and fix model - Write some programming code and keep modifying it until the customer is
happy. Without planning, this is very open-ended and risky.
Rapid prototyping model - Emphasis is on creating a prototype that looks and acts like the desired
product in order to test its usefulness. Once the prototype is approved, it is discarded and the real
software is written.
Incremental model - Divides the product into builds, where sections of the project are created and
tested separately.
Synchronize and stabilise model - Combines the advantages of the spiral model with the technology
of overseeing and managing source code. This method allows many teams to work efficiently in
parallel. It was defined by David Yoffie of Harvard University and Michael Cusumano of the
Massachusetts Institute of Technology, who studied how Microsoft Corporation developed Internet
Explorer and how Netscape Communications Corporation developed Communicator, finding common
threads in the ways the two companies worked.
Waterfall Model
The Waterfall Model signifies a traditional type of system development project lifecycle. It builds
upon the basic steps associated with system development project lifecycle and uses a top-down
development cycle in completing the system.
Walsham (1993) outlined the steps in the Waterfall Model which are as under:
1. A preliminary evaluation of the existing system is conducted and deficiencies are then
identified. This can be done by interviewing users of the system and consulting with
support personnel.
2. The new system requirements are defined. In particular, the deficiencies in the existing
system must be addressed with specific proposals for improvement.
3. The proposed system is designed. Plans are developed and delineated concerning the
physical construction, hardware, operating systems, programming, communications, and
security issues.
4. The new system is developed and the new components and programs are obtained and
installed.
5. Users of the system are then trained in its use, and all aspects of performance are tested. If
necessary, adjustments must be made at this stage.
6. The system is put into use. This can be done in various ways. The new system can be
phased in, according to application or location, and the old system is gradually replaced.
In some cases, it may be more cost-effective to shut down the old system and implement
the new system all at once.
7. Once the new system is up and running for a while, it should be exhaustively evaluated.
Maintenance must be kept up rigorously at all times.
8. Users of the system should be kept up-to-date concerning the latest modifications and
procedures.
On the basis of the Waterfall Model, if system developers find problems associated with a
step, an effort is made to go back to the previous step or the specific step in which the
problem occurred, and fix the problem by completing the step once more.
Figure: The Waterfall Model's development schedule
Fountain model: The Fountain model is a logical enhancement to the Waterfall model. This model
allows for the advancement from various stages of software development regardless of whether or not
enough tasks have been completed to reach it.
Prototyping Model: The prototyping paradigm starts with collecting the requirements. Developer
and customer meet and define the overall objectives for the software, identify whatever requirements
are known, and outline areas where further definition is mandatory. The prototype is appraised by the
customer/user and used to improve requirements for the software to be developed.
Major Advantages of this Model include
1. When the prototype is presented to the user, he gets a clear picture of the functionality of
the software and he can suggest changes and modifications.
2. It demonstrates the concept to prospective investors to get funding for the project and thus gives a
clear view of how the software will respond.
3. It decreases the risk of failure, as potential risks can be recognized early and mitigation steps
can be taken, so effective elimination of the potential causes is possible.
4. Iteration between the development team and the client provides a very good and conducive
environment during the project. Both the developer side and the customer side stay coordinated.
5. The time required to complete the project after the final SRS is obtained is reduced, since the
developer has a better idea about how he should approach the project.
Main drawbacks of this model are that Prototyping is typically done at the cost of the developer. So it
should be done using nominal resources. It can be done using Rapid Application Development tools.
Sometimes the start-up cost of building a development team focused on making the prototype is
high. It is a slow process, and too much involvement of the client is not always favoured by the developer.
Figure: different phases of Prototyping model
Uses of prototyping:
1. Verifying user needs
2. Verifying that design = specifications
3. Selecting the “best” design
4. Developing a conceptual understanding of novel situations
5. Testing a design under varying environments
6. Demonstrating a new product to upper management
7. Implementing a new system in the user environment quickly
Rapid Application Development
This model is based on prototyping and iterative development with no detailed planning involved. The
process of writing the software itself involves the planning required for developing the product. Rapid
Application development focuses on gathering customer requirements through workshops or focus
groups, early testing of the prototypes by the customer using iterative concept, reuse of the existing
prototypes (components), continuous integration and rapid delivery. There are three main phases to
Rapid Application Development:
1. Requirements planning
2. RAD design workshop
3. Implementation
RAD Model
RAD is used when the team includes programmers and analysts who are experienced with it, there are
pressing reasons for speeding up application development, the project involves a novel ecommerce
application and needs quick results and users are sophisticated and highly engaged with the goals of
the company.
Spiral Model: The spiral model was developed by Barry Boehm in 1988 (Boehm, 1986). This model
was developed to address the inadequacies of the Waterfall Model. Boehm stated that
“the major distinguishing feature of the Spiral Model is that it creates a risk-driven approach to the
software process rather than a primarily document-driven or code-driven process”. The Spiral Model was the
first model to elucidate why iteration matters. The spiral model consists of four phases:
1. Planning
2. Risk Analysis
3. Engineering
4. Evaluation
Major benefits of this model include:
1. Changing requirements can be accommodated.
2. Allows for extensive use of prototypes.
3. Requirements can be captured more accurately.
4. Users see the system early.
5. Development can be divided in to smaller parts and more risky parts can be developed
earlier which helps better risk management.
Main drawbacks of this model are as under:
1. Management is more complex.
2. The end of the project may not be known early.
3. Not suitable for small or low-risk projects (expensive for small projects).
4. The process is complex.
5. The spiral may continue indefinitely.
6. Large numbers of intermediate stages require excessive documentation.
The spiral model is normally used in huge projects. For example, the military had adopted the spiral
model for its Future Combat Systems program. The spiral model may suit small software applications.
Incremental model: The incremental model is a technique of software development in which the model is
analysed, designed, tested, and implemented incrementally. Some benefits of this model are that it
handles large projects and combines the functionality of the waterfall and the prototyping models.
Disadvantages of this model are that when a problem is remedied in one functional unit, all the related
functional units may have to be corrected, which takes a lot of time, and that it needs good planning and
design.
Figure: Incremental model of SDLC
There are numerous benefits of integrating security into the system development life cycle that are as
under:
1. Early documentation and alleviation of security vulnerabilities and problems with the
configuration of systems, resulting in lower costs to implement security controls and
mitigation of vulnerabilities;
2. Awareness of potential engineering challenges caused by mandatory security controls.
3. Identification of shared security services and reuse of security strategies and tools that
will reduce development costs and improve the system’s security posture through the
application of proven methods and techniques.
4. Assistance of informed executive decision making through the application of a
comprehensive risk management process in a timely manner.
5. Documentation of important security decisions made during the development process to
inform management about security considerations during all phases of development.
6. Enhanced organization and customer confidence to facilitate adoption and use of systems,
and improved confidence in the continued investment in government systems.
7. Improved systems interoperability and integration that would be difficult to achieve if
security is considered separately at various system levels.
Strengths of System Development Life Cycle
1. Methodologies incorporating this approach have been well tried and tested.
2. This cycle divides development into distinct phases.
3. Makes tasks more manageable.
4. It offers opportunity for more control over the development process.
5. It provides standards for documentation.
6. It is better than trial and error.
Weaknesses of System Development Life Cycle
1. It fails to realise the “big picture” of strategic management.
2. It is too inflexible to cope with changing requirements.
3. It stresses “hard” thinking (which is often reflected in documentation that is too
technical).
4. It is unable to capture the true needs of users.
An operating system (OS) is a software program that serves as a conduit between computer
hardware and the user. It is a piece of software that coordinates the execution of application
programs, software resources, and computer hardware. It also aids in the control of software
and hardware resources such as file management, memory management, input/output, and a
variety of peripheral devices such as disc drives, printers, and so on. To run other
applications, every computer system must have at least one operating system. Browsers, MS
Office, Notepad, games, and other applications require an environment to execute and fulfill
their functions. This section explains the evolution of operating systems over the past years.
Evolution of Operating Systems
Operating systems have progressed from slow and expensive systems to today's technology,
which has exponentially increased computing power at comparatively modest costs. So let's have a
detailed look at the evolution of operating systems.
Because of the need to respond to timing demands made by different stimuli/responses, the system
architecture must allow for fast switching between stimulus handlers. Timing demands of different
stimuli are different so a simple sequential loop is not usually adequate. Real-time systems are
therefore usually designed as cooperating processes with a real-time executive controlling these
processes.
Sensor control processes collect information from sensors. They may buffer information collected
in response to a sensor stimulus.
Data processor carries out processing of collected information and computes the system
response.
Actuator control processes generate control signals for the actuators.
Processes in a real-time system have to be coordinated and share information.
Process coordination mechanisms ensure mutual exclusion to shared resources. When one process is
modifying a shared resource, other processes should not be able to change that resource. When
designing the information exchange between processes, you have to take into account the fact that
these processes may be running at different speeds.
Producer processes collect data and add it to the buffer. Consumer processes take data from the buffer
and make elements available. Producer and consumer processes must be mutually excluded from
accessing the same element.
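As an illustration (not taken from the text), the following is a minimal sketch in Python of a producer/consumer bounded buffer protected by mutual exclusion; the buffer size, process names and sensor values are invented:

import threading
import collections

class CircularBuffer:
    # Bounded buffer shared by the producer and consumer processes.
    def __init__(self, size):
        self.items = collections.deque()
        self.size = size
        self.lock = threading.Condition()   # guarantees mutual exclusion on the buffer

    def put(self, item):
        with self.lock:                      # only one process may modify the buffer at a time
            while len(self.items) == self.size:
                self.lock.wait()             # buffer full: the producer waits
            self.items.append(item)
            self.lock.notify_all()

    def get(self):
        with self.lock:
            while not self.items:
                self.lock.wait()             # buffer empty: the consumer waits
            item = self.items.popleft()
            self.lock.notify_all()           # a slot is now available again
            return item

def sensor_producer(buffer):
    for reading in range(5):                 # invented sensor stimuli
        buffer.put(reading)

def data_consumer(buffer):
    for _ in range(5):
        print("processed sensor value", buffer.get())

buf = CircularBuffer(size=2)
producer = threading.Thread(target=sensor_producer, args=(buf,))
consumer = threading.Thread(target=data_consumer, args=(buf,))
producer.start(); consumer.start()
producer.join(); consumer.join()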
The effect of a stimulus in a real-time system may trigger a transition from one state to another. State
models are therefore often used to describe embedded real-time systems. UML state diagrams may be
used to show the states and state transitions in a real-time system.
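As a small illustration (assumed, not from the text), a state model can be coded directly as a transition table that maps (state, stimulus) pairs to new states; the states and stimuli below are invented:

# Transition table: (current state, stimulus) -> next state.
TRANSITIONS = {
    ("idle", "sensor_alarm"): "alerting",
    ("alerting", "operator_ack"): "idle",
    ("idle", "shutdown_cmd"): "stopped",
}

def next_state(state, stimulus):
    # Unknown stimuli leave the system in its current state.
    return TRANSITIONS.get((state, stimulus), state)

state = "idle"
for stimulus in ["sensor_alarm", "operator_ack", "shutdown_cmd"]:
    state = next_state(state, stimulus)
    print(stimulus, "->", state)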
Architectural patterns for real-time software
Characteristic system architectures for embedded systems:
Observe and React pattern is used when a set of sensors are routinely monitored and
displayed.
Environmental Control pattern is used when a system includes sensors, which provide
information about the environment and actuators that can change the environment.
Process Pipeline pattern is used when data has to be transformed from one representation to
another before it can be processed.
Observe and React pattern description
The input values of a set of sensors of the same types are collected and analyzed. These values are
displayed in some way. If the sensor values indicate that some exceptional condition has arisen, then
actions are initiated to draw the operator's attention to that value and, in certain cases, to take actions
in response to the exceptional value.
Stimuli: Values from sensors attached to the system.
Responses: Outputs to display, alarm triggers, signals to reacting systems.
Processes: Observer, Analysis, Display, Alarm, Reactor.
Used in: Monitoring systems, alarm systems.
Environmental Control pattern description
The system analyzes information from a set of sensors that collect data from the system's
environment. Further information may also be collected on the state of the actuators that are
connected to the system. Based on the data from the sensors and actuators, control signals are sent to
the actuators that then cause changes to the system's environment.
Stimuli: Values from sensors attached to the system and the state of the system actuators.
Responses: Control signals to actuators, display information.
Processes: Monitor, Control, Display, Actuator Driver, Actuator monitor.
Used in: Control systems.
Timing analysis
The correctness of a real-time system depends not just on the correctness of its outputs but also on the
time at which these outputs were produced. In a timing analysis, you calculate how often each process
in the system must be executed to ensure that all inputs are processed and all system responses
produced in a timely way. The results of the timing analysis are used to decide how frequently each
process should execute and how these processes should be scheduled by the real-time operating
system.
Factors in timing analysis:
Deadlines: the times by which stimuli must be processed and some response produced by the
system.
Frequency: the number of times per second that a process must execute so that you are
confident that it can always meet its deadlines.
Execution time: the time required to process a stimulus and produce a response.
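As a hedged illustration of how these factors are combined, the sketch below applies the well-known Liu and Layland utilisation bound for rate monotonic scheduling to an invented set of periodic processes; the execution times and periods are assumptions, not values from the text:

# Schedulability check for periodic processes under rate monotonic scheduling.
# Times are in milliseconds; the process set is invented for the example.

def rm_schedulable(processes):
    # processes: list of (execution_time, period) pairs
    n = len(processes)
    utilisation = sum(c / t for c, t in processes)
    bound = n * (2 ** (1 / n) - 1)    # Liu & Layland utilisation bound
    return utilisation, bound, utilisation <= bound

procs = [(10, 100), (15, 150), (40, 350)]   # hypothetical stimulus-handling processes
u, bound, ok = rm_schedulable(procs)
print(f"utilisation={u:.3f}, bound={bound:.3f}, schedulable={ok}")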
Real-time operating systems
Real-time operating systems are specialized operating systems which manage the processes in the
RTS. They are responsible for process management and resource (processor and memory) allocation. They may
be based on a standard kernel which is used unchanged or modified for a particular application. They do not
normally include facilities such as file management.
Real-time operating system components:
Real-time clock provides information for process scheduling.
Interrupt handler manages aperiodic requests for service.
Scheduler chooses the next process to be run.
Resource manager allocates memory and processor resources.
Dispatcher starts process execution.
The scheduler chooses the next process to be executed by the processor. This depends on a scheduling
strategy which may take the process priority into account. The resource manager allocates memory
and a processor for the process to be executed.
Scheduling strategies:
Non pre-emptive scheduling: once a process has been scheduled for execution, it runs to
completion or until it is blocked for some reason (e.g. waiting for I/O).
Pre-emptive scheduling: the execution of an executing process may be stopped if a higher
priority process requires service.
Scheduling algorithms include round-robin, rate monotonic, and shortest deadline first.
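A minimal sketch (process names and deadlines invented) of how a scheduler might choose the next process under the shortest deadline first strategy:

from dataclasses import dataclass

@dataclass
class Process:
    name: str
    deadline_ms: int    # absolute deadline of the pending response
    priority: int       # would be used by a priority-based strategy instead

def shortest_deadline_first(ready_queue):
    # Pick the ready process whose deadline is closest (earliest deadline first).
    return min(ready_queue, key=lambda p: p.deadline_ms)

ready = [Process("sensor", 40, 2), Process("actuator", 25, 3), Process("display", 90, 1)]
print("dispatch:", shortest_deadline_first(ready).name)   # dispatch: actuator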
Embedded System Design
In the Automatic Chocolate Vending Machine (ACVM), an LCD displays messages such as cost, time, welcome, etc. A delivery port exists
where the chocolates are collected.
Hardware
ACVM hardware architecture has the following hardware specifications
Microcontroller 8051
64 KB RAM and 8MB ROM
64 KB Flash memory
Keypad
Mechanical coin sorter
Chocolate channel
Coin channel
USB wireless modem
Power supply
Software of ACVM
Many programs have to be written so that they can be reprogrammed in RAM/ROM when required,
for example:
Increase in chocolate price
Updating messages to be displayed in LCD
Change in features of the machine.
An Embedded System is a combination of hardware and software that performs a particular function.
Embedded processors are of two types: microprocessors and microcontrollers. While designing an embedded
system, certain design constraints and specifications are to be considered so that the developer can
meet the customer's expectations and deliver on time. An application of embedded
system design, the ACVM, is explained in this content. Here is a question: what is the cause of
environmental constraints while designing an embedded system?
2. Client-server pattern
This pattern consists of two parties; a server and multiple clients. The server component will provide
services to multiple client components. Clients request services from the server and the server
provides relevant services to those clients. Furthermore, the server continues to listen to client
requests.
Usage
3. Master-slave pattern
This pattern consists of two parties; master and slaves. The master component distributes the work
among identical slave components, and computes a final result from the results which the slaves
return.
Usage
In database replication, the master database is regarded as the authoritative source, and
the slave databases are synchronized to it.
Peripherals connected to a bus in a computer system (master and slave drives).
4. Pipe-filter pattern
This pattern can be used to structure systems which produce and process a stream of data. Each
processing step is enclosed within a filter component. Data to be processed is passed through pipes.
These pipes can be used for buffering or for synchronization purposes.
Usage
Compilers. The consecutive filters perform lexical analysis, parsing, semantic analysis,
and code generation.
Workflows in bioinformatics.
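As an illustration of the pipe-filter idea (the filters below are invented, not from the text), each processing step is a small function and the pipeline simply passes the data from one filter to the next:

def read_source(text):
    return text.splitlines()                              # filter 1: split raw input into lines

def strip_comments(lines):
    return [l for l in lines if not l.strip().startswith("#")]   # filter 2: drop comment lines

def count_tokens(lines):
    return sum(len(l.split()) for l in lines)             # filter 3: count remaining tokens

def pipeline(data, filters):
    for f in filters:                                     # the "pipes" simply hand results along
        data = f(data)
    return data

result = pipeline("x = 1\n# a comment\ny = x + 2", [read_source, strip_comments, count_tokens])
print("tokens:", result)                                  # tokens: 8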
5. Broker pattern
This pattern is used to structure distributed systems with decoupled components. These components
can interact with each other by remote service invocations. A broker component is responsible for the
coordination of communication among components.
Usage
Message broker software such as Apache ActiveMQ, Apache Kafka, RabbitMQ and JBoss
Messaging.
6. Peer-to-peer pattern
In this pattern, individual components are known as peers. Peers may function both as a client,
requesting services from other peers, and as a server, providing services to other peers. A peer may
act as a client or as a server or as both, and it can change its role dynamically with time.
Usage
7. Event-bus pattern
This pattern primarily deals with events and has 4 major components; event source, event
listener, channel and event bus. Sources publish messages to particular channels on an event bus.
Listeners subscribe to particular channels. Listeners are notified of messages that are published to a
channel to which they have subscribed before.
Usage
Android development
Notification services
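A minimal sketch of the event-bus pattern (channel and listener names are invented): sources publish to named channels, and every listener subscribed to a channel is notified:

from collections import defaultdict

class EventBus:
    def __init__(self):
        self.channels = defaultdict(list)        # channel name -> list of listeners

    def subscribe(self, channel, listener):
        self.channels[channel].append(listener)

    def publish(self, channel, message):
        for listener in self.channels[channel]:  # notify every listener subscribed to the channel
            listener(message)

bus = EventBus()
bus.subscribe("notifications", lambda msg: print("listener A got:", msg))
bus.subscribe("notifications", lambda msg: print("listener B got:", msg))
bus.publish("notifications", "build finished")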
8. Model-view-controller pattern
This pattern, also known as the MVC pattern, divides an interactive application into 3 parts:
1. model — contains the core functionality and data
2. view — displays the information to the user (more than one view may be defined)
3. controller — handles the input from the user
Usage
9. Blackboard pattern
This pattern is useful for problems for which no deterministic solution strategies are known. The
blackboard pattern consists of 3 main components.
blackboard — a structured global memory containing objects from the solution space
knowledge source — specialized modules with their own representation
control component — selects, configures and executes modules.
Usage
Speech recognition
Vehicle identification and tracking
Protein structure identification
Sonar signals interpretation.
1. Performance Analysis:
The constraints enforced on the response of the system are known as Performance Constraints. This
basically describes the overall performance of the system. This shows how quickly and accurately the
system is responding. It ensures that the real-time system performs satisfactorily.
2. Behavioral Analysis:
The constraints enforced on the stimuli generated by the environment are known as Behavioral
Constraints. This basically describes the behavior of the environment. It ensures that the environment
of a system is well behaved.
Further, both the performance and behavioral constraints are classified into three categories: Delay
Constraint, Deadline Constraint, and Duration Constraint. These are explained below.
1. Delay Constraint
A delay constraint describes the minimum time interval between occurrence of two
consecutive events in the real-time system. If an event occurs before the delay constraint, then
it is called a delay violation. The time interval between occurrence of two events should be
greater than or equal to delay constraint.
If D is the actual time interval between occurrence of two events and d is the delay constraint,
then
D >= d
2. Deadline Constraint
A deadline constraint describes the maximum time interval between occurrence of two
consecutive events in the real-time system. If an event occurs after the deadline constraint,
then the result of event is considered incorrect. The time interval between occurrence of two
events should be less than or equal to deadline constraint.
If D is the actual time interval between occurrence of two events and d is the deadline
constraint, then
D <= d
3. Duration Constraint –
Duration constraint describes the duration of an event in real-time system. It describes the
minimum and maximum time period of an event. On this basis, it is further classified into two
types: the minimum duration constraint and the maximum duration constraint.
This process is completely uninterrupted unless a higher priority interrupt occurs during its execution.
Therefore, there must be a strict hierarchy of priority among the interrupts. The interrupt with the
highest priority must be allowed to initiate the process.
Real-time systems employ special-purpose operating systems because conventional
operating systems do not provide such performance.
The various examples of Real-time operating systems are:
o MTS
o Lynx
o QNX
o VxWorks etc.
Applications of Real-time operating system (RTOS):
RTOS is used in real-time applications that must work within specific deadlines. The following are the
common areas of application of Real-time operating systems:
o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in telephone switching systems.
o Real-time operating systems are used in air traffic control systems.
o Real-time operating systems are used in medical imaging systems.
o Real-time operating systems are used in fuel injection systems.
o Real-time operating systems are used in traffic control systems.
o Real-time operating systems are used in autopilot flight simulators.
Types of Real-time operating system
1. Before testing starts, it’s necessary to identify and specify the requirements of the product in
a quantifiable manner.
There are different quality characteristics of software, such as maintainability, which means the
ability to update and modify; probability, which means the ability to find and estimate any risk; and
usability, which means how easily it can be used by the customers or end-users. All these
characteristic qualities should be specified in a particular order to obtain clear test results without
any error.
2. Specifying the objectives of testing in a clear and detailed manner.
There are several objectives of testing, such as effectiveness, which means how effectively the
software can achieve its target; failure, which means the inability to fulfill the requirements and
perform functions; and the cost of defects or errors, which means the cost required to fix an error. All
these objectives should be clearly mentioned in the test plan.
3. For the software, identifying the user’s category and developing a profile for each user.
Use cases describe the interactions and communication among different classes of users and the
system to achieve the target. This helps to identify the actual requirements of the users and then to test
the actual use of the product.
4. Developing a test plan to give value and focus on rapid-cycle testing.
Rapid Cycle Testing is a type of test that improves quality by identifying and measuring any
changes that are required to improve the process of the software. Therefore, a test plan is an
important and effective document that helps the tester to perform rapid cycle testing.
Unit Testing
Unit testing involves the testing of each unit or an individual component of the software application. It
is the first level of functional testing. The aim behind unit testing is to validate unit components along with
their performance.
A unit is a single testable part of a software system and tested during the development phase of the
application software.
The purpose of unit testing is to test the correctness of isolated code. A unit component is an
individual function or piece of code of the application. A white box testing approach is used for unit testing,
and it is usually done by the developers.
Whenever the application is ready and given to the Test engineer, he/she will start checking every
component of the module or module of the application independently or one by one, and this process
is known as Unit testing or components testing.
Why Unit Testing?
In the testing level hierarchy, unit testing is the first level of testing, done before integration and the other
remaining levels of testing. It uses modules for the testing process, which reduces the dependency on
waiting for other modules. Unit testing frameworks, stubs, drivers and mock objects are used for assistance in unit
testing.
Generally, the software goes through four levels of testing: Unit Testing, Integration Testing, System
Testing, and Acceptance Testing, but sometimes, due to time constraints, software testers do
minimal unit testing; however, skipping unit testing may lead to higher defects during Integration Testing,
System Testing, and Acceptance Testing, or even during Beta Testing, which takes place after the
completion of the software application.
Some crucial reasons are listed below:
Unit testing helps testers and developers to understand the code base, which makes them
able to change defect-causing code quickly.
Unit testing helps in the documentation.
Unit testing fixes defects very early in the development phase, which is why a smaller
number of defects is likely to occur in upcoming testing levels.
It helps with code reusability by migrating code and test cases.
How to execute Unit Testing
In order to execute unit tests, developers write a section of code to test a specific function in the
software application. Developers can also isolate this function to test it more rigorously, which reveals
unnecessary dependencies between the function being tested and other units, so that the dependencies can be
eliminated. Developers generally use a UnitTest framework to develop automated test cases for unit
testing.
Unit Testing is of two types
Manual
Automated
Unit testing is commonly automated but may still be performed manually. Software Engineering does
not favor one over the other but automation is preferred. A manual approach to unit testing may
employ a step-by-step instructional document.
Under the automated approach-
A developer writes a section of code in the application just to test the function. They would
later comment out and finally remove the test code when the application is deployed.
A developer could also isolate the function to test it more rigorously. This is a more thorough
unit testing practice that involves copying the code to its own testing environment rather than
its natural environment. Isolating the code helps in revealing unnecessary dependencies
between the code being tested and other units or data spaces in the product. These
dependencies can then be eliminated.
A coder generally uses a UnitTest Framework to develop automated test cases. Using an
automation framework, the developer codes criteria into the test to verify the correctness of
the code. During execution of the test cases, the framework logs failing test cases. Many
frameworks will also automatically flag and report, in summary, these failed test cases.
Depending on the severity of a failure, the framework may halt subsequent testing.
The workflow of Unit Testing is 1) Create Test Cases 2) Review/Rework 3) Baseline 4)
Execute Test Cases.
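As a hedged illustration of an automated unit test, the sketch below uses Python's built-in unittest framework; the tools listed in the next section (JUnit, NUnit, PHPUnit) follow the same idea in other languages. The function under test, price_after_discount, is invented for the example:

import unittest

def price_after_discount(price, percent):
    # Invented function under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class PriceAfterDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(price_after_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(price_after_discount(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            price_after_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()   # the framework runs the cases and logs any failures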
Unit Testing Tools
NUnit
JUnit
PHPunit
Parasoft Jtest
EMMA
Junit: JUnit is a free-to-use testing tool for the Java programming language. It provides assertions to
identify test methods. This tool tests data first and then inserts it into the piece of code.
NUnit: NUnit is a widely used unit-testing framework for all .NET languages. It is an open source
tool which allows writing scripts manually. It supports data-driven tests which can run in parallel.
Parasoft Jtest: Parasoft Jtest is a unit testing tool for Java. It is a code coverage tool with line and
path metrics. It allows mocking APIs with recording and verification syntax. This tool offers line
coverage, path coverage, and data coverage.
EMMA: EMMA is an open-source toolkit for analyzing and reporting the coverage of code written in the Java
language. EMMA supports coverage types like method, line, and basic block. It is Java-based, so it has no
external library dependencies and can access the source code.
PHPUnit: PHPUnit is a unit testing tool for PHP programmers. It takes small portions of code,
which are called units, and tests each of them separately. The tool also allows developers to use predefined
assertion methods to assert that a system behaves in a certain manner.
Test Driven Development (TDD) & Unit Testing
Unit testing in TDD involves an extensive use of testing frameworks. A unit test framework is used in
order to create automated unit tests. Unit testing frameworks are not unique to TDD, but they are
essential to it. Below we look at some of what TDD brings to the world of unit testing:
Developers looking to learn what functionality is provided by a unit and how to use it can
look at the unit tests to gain a basic understanding of the unit API.
Unit testing allows the programmer to refactor code at a later date, and make sure the module
still works correctly (i.e. Regression testing). The procedure is to write test cases for all
functions and methods so that whenever a change causes a fault, it can be quickly identified
and fixed.
Due to the modular nature of the unit testing, we can test parts of the project without waiting
for others to be completed.
Unit Testing Disadvantages
Unit testing can’t be expected to catch every error in a program. It is not possible to
evaluate all execution paths even in the most trivial programs
Unit testing by its very nature focuses on a unit of code. Hence it can’t catch integration
errors or broad system level errors.
Integration Testing
Integration Testing is defined as a type of testing where software modules are integrated logically and
tested as a group. A typical software project consists of multiple software modules, coded by different
programmers. The purpose of this level of testing is to expose defects in the interaction between these
software modules when they are integrated
Integration Testing focuses on checking data communication amongst these modules. Hence it is also
termed as ‘I & T’ (Integration and Testing), ‘String Testing’ and sometimes ‘Thread Testing’.
Once all the components or modules are working independently, we need to check the data flow
between the dependent modules; this is known as integration testing.
Reason Behind Integration Testing
1. Each module is designed by an individual software developer whose programming logic
may differ from that of the developers of other modules, so integration testing becomes essential to
determine that the software modules work together.
2. To check whether the interaction of software modules with the database is erroneous
or not.
3. Requirements can be changed or enhanced at the time of module development. These new
requirements may not be tested at the level of unit testing hence integration testing
becomes mandatory.
4. Incompatibility between modules of software could create errors.
5. To test hardware's compatibility with software.
6. If exception handling is inadequate between modules, it can create bugs.
Integration Testing Techniques
Black Box Testing
Incremental Approach
In the Incremental Approach, modules are added in ascending order one by one or according to need.
The selected modules must be logically related. Generally, two or more than two modules are added
and tested to determine the correctness of functions. The process continues until the successful testing
of all the modules.
Top-Down approach
Bottom-Up approach
Top-Down Approach
The top-down testing strategy deals with the process in which higher level modules are tested with
lower level modules until the successful completion of testing of all the modules. Major design flaws
can be detected and fixed early because critical modules are tested first. In this type of method, we will
add the modules incrementally or one by one and check the data flow in the same order.
In the top-down approach, we will be ensuring that the module we are adding is the child of the
previous one like Child C is a child of Child B and so on as we can see in the below image:
Advantages:
Critical modules are tested first, so major design flaws can be detected and fixed early.
Bottom-Up Approach
In the bottom-up method, we will ensure that the modules we are adding are the parent of the
previous one, as we can see in the below image:
Disadvantages
Critical modules are tested last due to which the defects can occur.
There is no possibility of an early prototype.
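Both incremental orders rely on temporary stand-ins for missing modules: top-down integration commonly uses stubs in place of lower-level modules that are not yet ready, while bottom-up integration uses drivers to exercise lower-level modules before their real callers exist. A minimal sketch, with all module and function names invented:

def payment_module_stub(order_total):
    # Stub standing in for the real, not-yet-integrated payment module: returns a canned result.
    return {"status": "approved", "charged": order_total}

def checkout(order_total, payment_module=payment_module_stub):
    # Higher-level module under test; the real payment module can be injected later.
    result = payment_module(order_total)
    return result["status"] == "approved"

def apply_discount(amount, percent):
    # Lower-level module that does not yet have a real caller.
    return amount * (1 - percent / 100)

def run_discount_driver():
    # Driver used in bottom-up integration to exercise the lower-level module.
    assert apply_discount(100.0, 50) == 50.0, "lower-level module gave an unexpected result"

print("top-down check with stub:", checkout(250.0))   # True
run_discount_driver()
print("bottom-up driver check passed")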
Hybrid Testing Method
In this approach, both Top-Down and Bottom-Up approaches are combined for testing. In this
process, top-level modules are tested with lower level modules and lower level modules tested with
high-level modules simultaneously. There is less possibility of occurrence of defect because each
module interface is tested.
Advantages
The hybrid method provides features of both Bottom Up and Top Down methods.
It is the most time-saving method.
It provides complete testing of all modules.
Disadvantages
This method needs a higher level of concentration, as the process is carried out in both
directions simultaneously.
Complicated method.
Non- incremental integration testing
We go for this method when the data flow is very complex and when it is difficult to identify which
module is a parent and which is a child. In such a case, we create the data in one module and check
whether it is present in all the other existing modules. Hence, it is also known as the Big Bang method.
Big Bang Method
In this approach, testing is done via integration of all modules at once. It is convenient for small
software systems; if it is used for large software systems, identification of defects is difficult.
Since this testing can be done only after the completion of all modules, the testing team has less time for
execution of this process, so internally linked interfaces and high-risk critical modules can be
missed easily.
Disadvantages:
Identification of defects is difficult because finding the error where it came from is a
problem, and we don't know the source of the bug.
Small modules missed easily.
Time provided for testing is very less.
We may miss to test some of the interfaces.
Entry and Exit Criteria of Integration Testing
Entry and Exit Criteria to Integration testing phase in any software development model
Entry Criteria:
First, determine the Integration Test Strategy that could be adopted and later prepare the test
cases and test data accordingly.
Study the Architecture design of the Application and identify the Critical Modules. These
need to be tested on priority.
Obtain the interface designs from the Architectural team and create test cases to verify all of
the interfaces in detail. Interface to database/external hardware/software application must be
tested in detail.
After the test cases, it’s the test data which plays the critical role.
Always have the mock data prepared, prior to executing. Do not select test data while
executing the test cases.
Validation Testing
Validation is determining if the system complies with the requirements and performs functions for
which it is intended and meets the organization’s goals and user needs.
Validation testing carries great responsibility, as you need to test all the critical business requirements
based on the user's needs. There should not be even a single miss on the requirements asked for by the
user. Hence, a keen knowledge of validation testing is very important.
Validation Testing ensures that the product actually meets the client's needs. It can also be defined as
to demonstrate that the product fulfills its intended use when deployed on appropriate environment.
Design Qualification: This includes creating the test plan based on the business
requirements. All the specifications need to be mentioned clearly.
Installation Qualification: This includes software installation based on the requirements.
Operational Qualification: This includes the testing phase based on the User
requirement specification.
This may include Functionality testing:
Testing the fully integrated applications including external peripherals in order to check how
components interact with one another and with the system as a whole. This is also called End
to End testing scenario.
Verify thorough testing of every input in the application to check for desired outputs.
Testing of the user’s experience with the application.
System Testing Hierarchy
As with almost any software engineering process, software testing has a prescribed order in which
things should be done. The following is a list of software testing categories arranged in chronological
order. These are the steps taken to fully test new software in preparation for marketing it:
Unit testing performed on each module or block of code during development. Unit
Testing is normally done by the programmer who writes the code.
Integration testing done before, during and after integration of a new module into the
main software package. This involves testing of each individual code module. One piece
of software can contain several modules which are often created by several different
programmers. It is crucial to test each module’s effect on the entire program model.
System testing done by a professional testing agent on the completed software product
before it is introduced to the market.
Acceptance testing – beta testing of the product done by the actual end users.
Types of System Testing
1. Usability Testing – mainly focuses on the user's ease of use of the application, flexibility in
handling controls and the ability of the system to meet its objectives.
2. Load Testing – is necessary to know that a software solution will perform under real-life
loads.
3. Regression Testing – involves testing done to make sure none of the changes made over
the course of the development process have caused new bugs. It also makes sure no old
bugs appear from the addition of new software modules over time.
4. Recovery Testing – is done to demonstrate a software solution is reliable, trustworthy
and can successfully recoup from possible crashes.
5. Migration Testing – is done to ensure that the software can be moved from older system
infrastructures to current system infrastructures without any issues.
6. Functional Testing – Also known as functional completeness testing, Functional
Testing involves trying to think of any possible missing functions. Testers might make a
list of additional functionalities that a product could have to improve it during functional
testing.
7. Hardware/Software Testing – IBM refers to Hardware/Software testing as “HW/SW
Testing”. This is when the tester focuses his/her attention on the interactions between the
hardware and software during system testing.
System Testing Process
Test Environment Setup: Create testing environment for the better quality testing.
Create Test Case: Generate test case for the testing process.
Create Test Data: Generate the data that is to be tested.
Execute Test Case: After the generation of the test case and the test data, test cases are
executed.
Defect Reporting: Defects in the system are detected.
Regression Testing: It is carried out to test the side effects of the testing process.
Log Defects: Defects are logged in this step.
Retest: If the test is not successful then again test is performed.
Regression Testing
Regression testing is performed under system testing to confirm and identify whether there is any defect
in the system due to modification in any other part of the system. It makes sure that any changes done
during the development process have not introduced a new defect, and it also gives assurance that old
defects will not reappear on the addition of new software over time.
Tools used for System Testing :
1. JMeter
2. Gallen Framework
3. Selenium
Advantages of System Testing :
The testers do not require deep knowledge of programming to carry out this testing.
It will test the entire product or software so that we will easily detect the errors or defects
which cannot be identified during the unit testing and integration testing.
The testing environment is similar to that of the real time production or business environment.
It checks the entire functionality of the system with different test scripts and also it covers the
technical and business requirements of clients.
After this testing, the product will almost cover all the possible bugs or errors and hence the
development team will confidently go ahead with acceptance testing.
Disadvantages of System Testing :
This testing is a more time-consuming process than other testing techniques, since it checks the
entire product or software.
The cost of the testing will be high, since it covers the testing of the entire software.
It needs a good debugging tool, otherwise the hidden errors will not be found.
Debugging
In the development process of any software, the software program is religiously tested, troubleshot,
and maintained for the sake of delivering bug-free products. There is nothing that is error-free in the
first go.
So, it is an obvious thing, to which everyone will relate, that when software is created it contains
a lot of errors; the reason is that nobody is perfect, and getting an error in the code is not an issue, but
avoiding it or not preventing it is an issue.
All those errors and bugs are discarded regularly, so we can conclude that debugging is nothing but a
process of eradicating or fixing the errors contained in a software program.
Debugging works stepwise, starting from identifying the errors, analyzing followed by removing the
errors. Whenever a software fails to deliver the result, we need the software tester to test the
application and solve it.
Since the errors are resolved at each step of debugging in software testing, we can conclude that
it is a tiresome and complex task, regardless of how efficient the result is.
Why do we need Debugging?
Debugging starts when we start writing the code for the software program. It continues progressively
through the consecutive stages of delivering a software product, because the code gets merged
with several other programming units to form the software product.
Following are some approaches used in debugging:
For a better understanding of a system, it is necessary to study the system in depth. It makes it
easier for the debugger to fabricate distinct illustrations of such systems that are needed to be
debugged.
The backward analysis analyzes the program from the backward location where the failure
message has occurred to determine the defect region. It is necessary to learn the area of
defects to understand the reason for defects.
In the forward analysis, the program tracks the problem in the forward direction by utilizing
the breakpoints or print statements incurred at different points in the program. It emphasizes
those regions where the wrong outputs are obtained.
To check and fix similar kinds of problems, it is recommended to utilize past experiences.
The success rate of this approach is directly proportional to the proficiency of the debugger.
Debugging Tools
The debugging tool can be understood as a computer program that is used to test and debug several
other programs. Presently, there are many public-domain software tools, such as gdb and dbx, in the market,
which can be utilized for debugging. These tools offer console-based command-line interfaces.
Some of the automated debugging tools include code-based tracers, profilers, interpreters, etc.
Here is a list of some of the widely used debuggers:
Radare2
WinDbg
Valgrind
Radare2
Radare2 is known for its reverse engineering framework as well as binary analysis. It is made up of a
small set of utilities, either utilized altogether or independently from the command line. It is also
known as r2.
It is constructed around a disassembler, which generates assembly language source
code from machine-executable code. It can support a wide range of executable formats for distinct
processor architectures and operating systems.
WinDbg
WinDbg is a multipurpose debugging tool designed for the Microsoft Windows operating system. This
tool can be used to debug the memory dumps created just after the Blue Screen of Death, which
arises when a bug check is issued. Besides, it is also helpful in debugging user-mode crash dumps,
which is why this is called post-mortem debugging.
Valgrind
Valgrind is a tool suite that offers several debugging and profiling tools to help users
make faster and more correct programs. Memcheck is one of its most popular tools; it can
successfully detect memory-related errors in C and C++ programs, errors which may otherwise crash the
program or result in unpredictable behavior.
White-Box Testing
The term white box is used because of the internal perspective of the system. The names clear box, white box
and transparent box denote the ability to see through the software's outer shell into its inner
workings.
Developers do white box testing. In this, the developer will test every line of the code of the program.
The developers perform the white-box testing and then send the application or the software to the
testing team, who perform the black box testing, verify the application against the
requirements, identify the bugs and send it back to the developers.
The developer fixes the bugs and does one round of white box testing and sends it to the testing team.
Here, fixing the bugs implies that the bug is deleted, and the particular feature is working fine on the
application.
Here, the test engineers are not involved in fixing the defects, for the following reasons:
Fixing the bug might interrupt the other features. Therefore, the test engineer should
always find the bugs, and developers should still be doing the bug fixes.
If the test engineers spend most of the time fixing the defects, then they may be unable to
find the other bugs in the application.
The white box testing contains various tests, which are as follows:
Path testing
Loop testing
Condition testing
Testing based on the memory perspective
Test performance of the program
Path testing
In path testing, we will draw the flow graphs and test all independent paths. Here, drawing the flow
graph implies that flow graphs represent the flow of the program and also show how every
program unit is connected with one another, as we can see in the below image.
Testing all the independent paths implies that, for example, for a path from main() to function G, we first set the
parameters and test whether the program is correct along that particular path, and in the same way we test all
other paths and fix the bugs.
Loop testing
In loop testing, we will test loops such as while, for, and do-while, etc., and also check whether the
terminating condition works correctly and whether the loop bounds are adequate.
For example, we have one program where the developers have written a loop of about 50,000 iterations:
{
   while (count < 50000)
   ……
   ……
}
We cannot test this program manually for all the 50,000 loop cycles. So we write a small program that
exercises all 50,000 cycles; as we can see in the program below, test P is written in the same
language as the source code program, and this is known as a unit test. It is written by the
developers only.
Test P
{
   ……
   ……
}
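As an additional illustration (not from the text), such a loop can be exercised automatically at its boundary values, zero, one, a typical count and the terminating limit, instead of stepping through 50,000 cycles by hand; the function and limit below are invented:

def accumulate(values, limit=50000):
    # Invented function containing the loop under test.
    total = 0
    count = 0
    while count < len(values) and count < limit:
        total += values[count]
        count += 1
    return total

def test_loop_boundaries():
    assert accumulate([]) == 0                       # zero iterations
    assert accumulate([5]) == 5                      # exactly one iteration
    assert accumulate([1] * 100) == 100              # a typical number of iterations
    assert accumulate([1] * 60000) == 50000          # terminating condition at the limit

test_loop_boundaries()
print("loop boundary tests passed")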
As we can see in the below image, we have various requirements such as 1, 2, 3, 4. The
developer then writes programs such as program 1, 2, 3, 4 for the parallel conditions. Here the
application contains hundreds of lines of code.
The developer will do the white box testing, and they will test all four programs line by line of
code to find the bugs. If they find any bug in any of the programs, they will correct it. They then
have to test the system again; this process takes a lot of time and effort and slows down the product
release time.
Now, suppose we have another case, where the clients want to modify the requirements; the
developer will then make the required changes and test all four programs again, which takes a lot of time
and effort.
These issues can be resolved in the following way:
We write tests for the program, where the developer writes the test code in the same
language as the source code. They then execute this test code, which is also known as a unit
test program. These test programs are linked to the main program and implemented as programs.
Therefore, if there is any requirement for modification or a bug in the code, the developer makes
the adjustment both in the main program and in the test program and then executes the test program.
Condition testing
In this, we will test all logical conditions for both true and false values; that is, we will verify
both the if and the else conditions.
For example:
if(condition) - true
{
…..
}
else - false
{
…..
}
The above program should work correctly for both conditions, which means that if the condition is
true, the if branch executes, and otherwise the else branch executes.
Generic steps of white box testing
Design all test scenarios and test cases, and prioritize them according to their priority.
This step involves the study of code at runtime to examine the resource utilization, not
accessed areas of the code, time taken by various methods and operations and so on.
In this step, testing of internal subroutines takes place: it is checked whether internal subroutines such as
non-public methods and interfaces are able to handle all types of data appropriately.
This step focuses on testing of control statements like loops and conditional statements to
check the efficiency and accuracy for different data inputs.
In the last step white box testing includes security testing to check all possible security
loopholes by looking at how the code handles security.
Drawbacks of white box testing
White box testing is too time consuming when it comes to large-scale programming
applications.
White box testing is expensive and complex.
It can lead to production errors if it is not carried out in detail by the developers.
White box testing needs professional programmers who have detailed knowledge and
understanding of the programming language and the implementation.
Techniques Used in White Box Testing
Data Flow Testing: Data flow testing is a group of testing strategies that examines the control flow
of programs in order to explore the sequence of variables according to the sequence of events.
Control Flow Testing: Control flow testing determines the execution order of statements or instructions
of the program through a control structure. The control structure of a program is used to develop a
test case for the program. In this technique, a particular part of a large program is selected by the
tester to set the testing path. Test cases are represented by the control graph of the program.
Branch Testing: Branch coverage technique is used to cover all branches of the control flow graph. It
covers all the possible outcomes (true and false) of each condition of a decision point at least once.
Statement Testing: Statement coverage technique is used to design white box test cases. This
technique involves execution of all statements of the source code at least once. It is used to calculate
the total number of executed statements in the source code, out of the total statements present in the
source code.
Decision Testing: This technique reports true and false outcomes of Boolean expressions. Whenever
there is a possibility of two or more outcomes from statements like do-while statements, if statements
and case statements (control flow statements), it is considered a decision point because there are two
outcomes, either true or false.
Basis Path Testing
Basis Path Testing in software engineering is a White Box Testing method in which test cases are
defined based on flows or logical paths that can be taken through the program. The objective of basis
path testing is to define the number of independent paths, so the number of test cases needed can be
defined explicitly to maximize test coverage.
In software engineering, Basis path testing involves execution of all possible blocks in a program and
achieves maximum path coverage with the least number of test cases. It is a hybrid method of branch
testing and path testing methods.
Steps for Basis Path testing
The basic steps involved in basis path testing include:
Step 1: Draw the control flow graph of the program.
Step 2: Calculate the cyclomatic complexity of the graph.
Step 3: Identify the independent paths. For the example flow graph, the independent paths are:
Path 1: 1A-2B-3C-4D-5F-9
Path 2: 1A-2B-3C-4E-6G-7I-9
Path 3: 1A-2B-3C-4E-6H-8J-9
Step 4: Design test cases to execute each of the paths identified above.
Independent Paths
An independent path in the control flow graph is the one which introduces at least one new edge that
has not been traversed before the path is defined. The cyclomatic complexity gives the number of
independent paths present in a flow graph. This is because the cyclomatic complexity is used as an
upper-bound for the number of tests that should be executed in order to make sure that all the
statements in the program have been executed at least once.
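As a hedged illustration, the cyclomatic complexity V(G) = E - N + 2P (edges, nodes, connected components) can be computed directly from a control flow graph; the small graph below is invented:

def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P for a control flow graph.
    return len(edges) - len(nodes) + 2 * components

nodes = [1, 2, 3, 4, 5]                                  # invented flow graph
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5)]         # node 2 is the only decision point
print("V(G) =", cyclomatic_complexity(edges, nodes))     # V(G) = 2, i.e. two independent paths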
Advantages of Basic Path Testing
Nested Loops – Loops within loops are called nested loops. When testing nested loops, the number
of tests increases as the level of nesting increases. The steps for testing nested loops are as
follows:
1. Start with inner loop. set all other loops to minimum values.
2. Conduct simple loop testing on inner loop.
3. Work outwards.
4. Continue until all loops tested.
Unstructured loops – This type of loop should be redesigned, whenever possible, to reflect the use
of structured programming constructs.
Black-Box Testing
Black box testing is a technique of software testing which examines the functionality of software
without peering into its internal structure or coding. The primary source of black box testing is a
specification of requirements that is stated by the customer.
In this method, the tester selects a function and gives an input value to examine its functionality, and checks
whether the function gives the expected output or not. If the function produces the correct output, then it
passes the test; otherwise it fails. The test team reports the result to the development team and
then tests the next function. After completing the testing of all functions, if there are severe problems, the
software is given back to the development team for correction.
The black box test is based on the specification of requirements, so it is examined in the
beginning.
In the second step, the tester creates a positive test scenario and an adverse test scenario by
selecting valid and invalid input values to check that the software is processing them correctly
or incorrectly.
In the third step, the tester develops various test cases such as decision table, all pairs test,
equivalent division, error estimation, cause-effect graph, etc.
The fourth phase includes the execution of all test cases.
In the fifth step, the tester compares the expected output against the actual output.
In the sixth and final step, if there is any flaw in the software, then it is fixed and tested again.
Types of Black Box Testing
There are many types of Black Box Testing but the following are the prominent ones –
Functional testing – This black box testing type is related to the functional requirements of a
system; it is done by software testers.
Non-functional testing – This type of black box testing is not related to testing of specific
functionality, but non-functional requirements such as performance, scalability, usability.
Regression testing – Regression Testing is done after code fixes, upgrades or any other
system maintenance to check the new code has not affected the existing code.
Test procedure
The test procedure of black box testing is a process in which the tester has specific knowledge
about what the software is supposed to do, and develops test cases to check the accuracy of the software's
functionality.
There are various techniques used in black box testing for testing like decision table technique,
boundary value analysis technique, state transition, All-pair testing, cause-effect graph technique,
equivalence partitioning technique, error guessing technique, use case technique and user story
technique. All these techniques have been explained in detail within the tutorial.
Test cases
Test cases are created considering the specification of the requirements. These test cases are generally
created from working descriptions of the software including requirements, design parameters, and
other specifications. For the testing, the test designer selects both positive test scenarios, by taking valid
input values, and adverse test scenarios, by taking invalid input values, to determine the correct output.
Test cases are mainly designed for functional testing but can also be used for non-functional testing.
Test cases are designed by the testing team; there is no involvement of the software development
team.
Techniques Used in Black Box Testing
Decision Table Technique: The decision table technique is a systematic approach where various input
combinations and their respective system behavior are captured in a tabular form. It is appropriate
for functions that have a logical relationship between two or more inputs.
Boundary Value Technique: The boundary value technique is used to test boundary values; boundary
values are those that contain the upper and lower limit of a variable. It tests whether the software
produces the correct output when a boundary value is entered.
State Transition Technique: The state transition technique is used to capture the behavior of the
software application when different input values are given to the same function. This applies to those
types of applications that provide a specific number of attempts to access the application.
All-pair Testing Technique: The all-pair testing technique is used to test all the possible discrete
combinations of values. This combinational method is used for testing applications that use checkbox
inputs, radio button inputs, list boxes, text boxes, etc.
Error Guessing Technique: Error guessing is a technique in which there is no specific method for
identifying the error. It is based on the experience of the test analyst, where the tester uses experience
to guess the problematic areas of the software.
Use Case Technique: The use case technique is used to identify the test cases from the beginning to
the end of the system, as per the usage of the system. By using this technique, the test team creates a
test scenario that can exercise the entire software based on the functionality of each function from
start to end.
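As a hedged illustration of how these black box techniques translate into concrete test cases, the sketch below applies boundary value analysis and equivalence partitioning to an invented rule (an age field that accepts values from 18 to 60); the function and values are assumptions, not from the text:

def is_valid_age(age):
    # Invented system under test: only its external behaviour matters to the tester.
    return 18 <= age <= 60

# Boundary values around 18 and 60, plus one representative value per equivalence partition.
test_cases = [
    (17, False), (18, True), (19, True),     # lower boundary
    (59, True), (60, True), (61, False),     # upper boundary
    (5, False), (40, True), (99, False),     # below-range, in-range, above-range partitions
]

for value, expected in test_cases:
    actual = is_valid_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"input={value:3d} expected={expected} actual={actual} -> {status}")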
Any change in the software configuration Items will affect the final product. Therefore, changes to
configuration items need to be controlled and managed.
Tasks in SCM process
Configuration Identification
Baselines
Change Control
Configuration Status Accounting
Configuration Audits and Reviews
Configuration Identification:
Configuration identification is a method of determining the scope of the software system. With the
help of this step, you can manage or control something even if you don’t know what it is. It is a
description that contains the CSCI type (Computer Software Configuration Item), a project identifier
and version information.
Activities during this process:
Identification of configuration Items like source code modules, test case, and
requirements specification.
Identification of each CSCI in the SCM repository, by using an object-oriented approach
The process starts with basic objects, which are grouped into aggregate objects. Details of
what, why, when and by whom changes are made are recorded.
Every object has its own features that identify its name that is explicit to all other objects
List of resources required such as the document, the file, tools, etc.
Example:
Instead of naming a file login.php, it should be named login_v1.2.php, where v1.2 stands for the
version number of the file.
Instead of naming a folder "Code", it should be named "Code_D", where D represents that the code should be
backed up daily.
Baseline:
A baseline is a formally accepted version of a software configuration item. It is designated and fixed
at a specific time while conducting the SCM process. It can only be changed through formal change
control procedures. In simple words, baseline means ready for release.
Activities during this process:
Control ad-hoc change to build stable software development environment. Changes are
committed to the repository
The request will be checked based on the technical merit, possible side effects and overall
impact on other configuration objects.
It manages changes and makes configuration items available during the software lifecycle.
Configuration Status Accounting:
Configuration status accounting tracks each release during the SCM process. This stage involves
tracking what each version has and the changes that lead to this version.
Activities during this process:
Keeps a record of all the changes made to the previous baseline to reach a new baseline
Identify all items to define the software configuration
Monitor status of change requests
Complete listing of all changes since the last baseline
Allows tracking of progress to next baseline
Allows to check previous releases/versions to be extracted for testing
Configuration Audits and Reviews:
Software Configuration audits verify that the software product satisfies the baseline requirements. It
ensures that what is built is what is delivered.
Activities during this process:
Configuration auditing is conducted by auditors by checking that defined processes are being
followed and ensuring that the SCM goals are satisfied.
To verify compliance with configuration control standards by auditing and reporting the changes
made.
SCM audits also ensure that traceability is maintained during the process.
Ensures that changes made to a baseline comply with the configuration status reports
Validation of completeness and consistency
Participant of SCM process:
1. Configuration Manager
2. Developer
The developer needs to change the code as per standard development activities or change
requests. He is responsible for maintaining the configuration of the code.
The developer should check the changes and resolve conflicts.
3. Auditor
Software Configuration Management Plan (SCMP)
The SCMP can follow a public standard like IEEE 828 or an organization-specific
standard.
It defines the types of documents to be managed and a document naming convention, for example
Test_v1.
SCMP defines the person who will be responsible for the entire SCM process and
creation of baselines.
Fix policies for version management & change control
Define tools which can be used during the SCM process
Configuration management database for recording configuration information.
Software Configuration Management Tools
Concurrency Management:
When two or more tasks are happening at the same time, it is known as concurrent operation.
Concurrency in context to SCM means that the same file being edited by multiple persons at the same
time. If concurrency is not managed correctly with SCM tools, then it may create many pressing
issues.
Version Control:
SCM uses an archiving method that saves every change made to a file. With the help of the archiving or save
feature, it is possible to roll back to a previous version in case of issues.
Importance of SCM
It is practical in controlling and managing access to various SCIs, e.g., by preventing two
members of a team from checking out the same component for modification at the same time.
SCM Repository
Software Configuration Management (SCM) is any kind of practice that tracks and provides control
over changes to source code. Software developers sometimes use revision control software to
maintain documentation and configuration files as well as source code. Revision control may
also track changes to configuration files.
As teams design, develop and deploy software, it is common for multiple versions of the same
software to be deployed in different sites and for the software's developers to be working
simultaneously on updates. Bugs or features of the software are often only present in certain versions.
Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and
run different versions of the software to determine in which version the problem occurs. It may also
be necessary to develop two versions of the software concurrently (for instance, where one version
has bugs fixed, but no new features, while the other version is where new features are worked on).
At the simplest level, developers could simply retain multiple copies of the different versions of the
program, and label them appropriately. This simple approach has been used in many large software
projects. While this method can work, it is inefficient as many near-identical copies of the program
have to be maintained. This requires a lot of self-discipline on the part of developers and often leads
to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to
a set of developers, and this adds the pressure of someone managing permissions so that the code base
is not compromised, which adds more complexity. Consequently, systems to automate some or all of
the revision control process have been developed. This ensures that the majority of management of
version control steps is hidden behind the scenes.
Moreover, in software development, legal and business practice and other environments, it has
become increasingly common for a single document or snippet of code to be edited by a team, the
members of which may be geographically dispersed and may pursue different and even contrary
interests. Sophisticated revision control that tracks and accounts for ownership of changes to
documents and code may be extremely helpful or even indispensable in such situations.
o Synchronization
We can synchronize our code so that programmers can get the latest code and are able to fetch
the updated code at any time from the repository.
o Short and Long term undo
In some cases, when a file gets really messed up, we can do a short-term undo back to the last
version, or a long-term undo that rolls the file back to a much earlier version.
o Track changes
We can track our own changes as well as the changes made by others, and we can see the commits
for the changes they have done.
o Ownership
We are able to see who owns each commit that has been made on a branch, typically the
master branch.
o Branching and merging
We can do branching and merging, which is very important in source code management: we can
create a branch of our source code, make our own changes on it, and then merge it back into the
master branch, as shown in the sketch below.
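The sketch below maps these capabilities onto everyday Git commands, driven here from Python's subprocess module. Git is used purely as a familiar example of an SCM tool, and the repository path and branch names are hypothetical.

import subprocess

def git(*args, repo="."):
    # Run a git command inside the given repository and return its output.
    return subprocess.run(["git", *args], cwd=repo, check=True,
                          capture_output=True, text=True).stdout

git("pull")                                 # synchronization: fetch the latest code
print(git("log", "--oneline", "-5"))        # track changes: the most recent commits
print(git("log", "--format=%an %s", "-5"))  # ownership: who made each commit
git("checkout", "-b", "feature/login")      # branching: isolate our own changes
# ...edit files and commit on the feature branch...
git("checkout", "master")
git("merge", "feature/login")               # merging: fold the changes back into master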
SCM Process
It uses tools to ensure that the necessary changes have been implemented adequately in the
appropriate component. The SCM process defines a number of tasks:
Identification
Basic Object: A unit of text created by a software engineer during analysis, design, code, or test.
Aggregate Object: A collection of basic objects and other aggregate objects. A Design Specification
is an aggregate object.
Each object has a set of distinct characteristics that identify it uniquely: a name, a description, a list of
resources, and a "realization."
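A small sketch of such an identification record follows; the field names mirror the characteristics listed above, while the example values are invented.

from dataclasses import dataclass, field

@dataclass
class ConfigObject:
    name: str                                      # unique name of the object
    description: str                               # what the object is
    resources: list = field(default_factory=list)  # resources the object uses or refers to
    realization: str = ""                          # pointer to the artifact that realizes it

# A basic object: one unit of text produced during design.
module_a_design = ConfigObject(
    name="DesignSpec_Module_A",
    description="Design of module A",
    resources=["UML class diagram"],
    realization="design_module_a.docx",
)

# An aggregate object, such as a Design Specification, groups basic objects together.
design_specification = [module_a_design]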
Version Control
Version control combines procedures and tools to handle the different versions of configuration
objects that are generated during the software process.
Clemm defines version control in the context of SCM: Configuration management allows a user to
specify the alternative configuration of the software system through the selection of appropriate
versions. This is supported by associating attributes with each software version, and then allowing a
configuration to be specified [and constructed] by describing the set of desired attributes.
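The sketch below illustrates that idea of attribute-based selection: each version carries a few attributes, and a configuration is assembled by picking the versions that match the desired attributes. The attribute names and version data are invented for the example.

versions = [
    {"component": "ui", "version": "1.2", "platform": "windows", "stable": True},
    {"component": "ui", "version": "1.3", "platform": "linux",   "stable": False},
    {"component": "db", "version": "2.0", "platform": "any",     "stable": True},
]

def select_configuration(desired):
    # Keep every version whose attributes match all of the desired attributes.
    return [v for v in versions
            if all(v.get(key) in (value, "any") for key, value in desired.items())]

# Build the configuration for a stable Windows release.
print(select_configuration({"platform": "windows", "stable": True}))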
Change Control
James Bach describes change control in the context of SCM as: Change control is vital. But the
forces that make it essential also make it annoying.
We worry about change because a small confusion in the code can create a big failure in the product.
But it can also fix a significant failure or enable incredible new capabilities. We worry about change
because a single rogue developer could sink the project, yet brilliant ideas originate in the minds of
those rogues, and a burdensome change control process could effectively discourage them from doing
creative work. A change request is submitted and evaluated to assess its technical merit, potential side
effects, the overall impact on other configuration objects and system functions, and the projected cost
of the change.
The results of the evaluations are presented as a change report, which is used by a change control
authority (CCA) - a person or a group who makes a final decision on the status and priority of the
change. The "check-in" and "check-out" process implements two necessary elements of change
control-access control and synchronization control.
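As a rough illustration of that flow, the sketch below walks a change request through evaluation, a change report, and a CCA decision. The fields, thresholds, and budget figure are all hypothetical.

from dataclasses import dataclass

@dataclass
class ChangeRequest:
    summary: str
    technical_merit: int     # e.g. 1 (low) to 5 (high), assigned during evaluation
    side_effects: str
    projected_cost: float

def evaluate(request):
    # Produce a change report summarizing the evaluation of the request.
    return {"request": request.summary,
            "merit": request.technical_merit,
            "side_effects": request.side_effects,
            "cost": request.projected_cost}

def cca_decision(report, budget=10_000):
    # The change control authority approves high-merit changes that fit the budget.
    return "approved" if report["merit"] >= 3 and report["cost"] <= budget else "deferred"

report = evaluate(ChangeRequest("Fix rounding error in invoices", 4, "none known", 1200.0))
print(cca_decision(report))    # -> approved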
Configuration Audit
SCM audits verify that the software product satisfies the baseline requirements and ensure that
what is built is what is delivered.
SCM audits also ensure that traceability is maintained between all CIs and that all work requests are
associated with one or more CI modifications. SCM audits are the "watchdogs" that ensure that the
integrity of the project's scope is preserved.
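A tiny sketch of one such traceability check follows: it flags any CI modification that is not linked to a work request. The data structure is invented for the example.

modifications = [
    {"ci": "billing_module", "work_requests": ["WR-101"]},
    {"ci": "login_module",   "work_requests": []},     # change with no work request
]

def audit_traceability(changes):
    # Report every CI modification that is not associated with a work request.
    return [change["ci"] for change in changes if not change["work_requests"]]

print(audit_traceability(modifications))   # -> ['login_module']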
Status Reporting
Configuration status reporting (sometimes also called status accounting) provides accurate status and
current configuration data to developers, testers, end users, customers, and stakeholders through admin
guides, user guides, FAQs, release notes, installation guides, configuration guides, etc.
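The sketch below shows how such status data might be gathered into a plain-text report that could feed release notes or a configuration guide; the release number and item names are made up.

baseline = {
    "release": "2.1.0",
    "items": {"ui": "1.3", "db": "2.0", "user_guide": "2.1"},
    "pending_changes": ["CR-17 approved, not yet merged"],
}

def status_report(data):
    # Format the current configuration data as a simple status report.
    lines = [f"Release: {data['release']}"]
    lines += [f"  {name}: version {ver}" for name, ver in data["items"].items()]
    lines += [f"  pending: {change}" for change in data["pending_changes"]]
    return "\n".join(lines)

print(status_report(baseline))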
Types of Supply Chain Models
Continuous Flow Model: One of the more traditional supply chain methods, this model is
often best for mature industries. The continuous flow model relies on a manufacturer
producing the same good over and over, expecting little variation in customer demand.
Agile Model: This model is best for companies with unpredictable demand or customer-order
products. This model prioritizes flexibility, as a company may have a specific need at any
given moment and must be prepared to pivot accordingly.
Fast Model: This model emphasizes the quick turnover of a product with a short life cycle.
Using a fast chain model, a company strives to capitalize on a trend, quickly produce goods,
and ensure the product is fully sold before the trend ends.
Flexible Model: The flexible model works best for companies impacted by seasonality. Some
companies may have much higher demand requirements during peak season and low volume
requirements in others. A flexible model of supply chain management makes sure production
can easily be ramped up or wound down.
Efficient Model: For companies competing in industries with very tight profit margins, a
company may strive to get an advantage by making their supply chain management process
the most efficient. This includes utilizing equipment and machinery in the most ideal ways in
addition to managing inventory and processing orders most efficiently.
Custom Model: If none of the models above suits a company's needs, it can always turn
towards a custom model. This is often the case for highly specialized industries with high
technical requirements, such as automobile manufacturing.
Example of SCM
Understanding the importance of SCM to its business, Walgreens Boots Alliance Inc. decided to
transform its supply chain by investing in technology to streamline the entire process. For several
years, the company has been investing in and revamping its supply chain management process.
Walgreens was able to use big data to help improve its forecasting capabilities and better manage the
sales and inventory management processes.