SE-Unit - 1 Notes
Software engineering is a term made up of two words: Software and Engineering.
Software is more than just a program code. A program is an executable code, which serves some
computational purpose. Software is considered to be a collection of executable programming
code, associated libraries, and documentation. Software, when made for a specific requirement is
called a software product.
Engineering, on the other hand, is all about developing products, using well-defined,
scientific principles and methods.
Software engineering is an engineering branch associated with the development of software
products using well-defined scientific principles, methods, and procedures. The outcome of
software engineering is an efficient and reliable software product.
Software has a dual role. It is a product and, at the same time, a vehicle for delivering a product.
Software delivers the most important product of our time: information.
Defining Software
Software is defined as
1. Instructions
2. Data structures
3. Documents
Characteristics of software
The software has characteristics that are considerably different from those of hardware:
1) Software is developed or engineered; it is not manufactured in the Classical Sense.
Types of software (application domains) include:
2) The software doesn’t “Wear Out”
• System software
• Application software
• Engineering/scientific software
• Embedded software
• Product-line software
• Web applications
• Open-world computing
• Net-sourcing
• Open Source
Legacy Software
• Legacy software refers to older programs that were developed decades ago.
• The quality of legacy software is often poor because it has an inextensible design, convoluted code, poor or nonexistent documentation, and test cases and results that were never archived.
As time passes, legacy systems evolve for the following reasons:
• The software must be adapted to meet the needs of a new computing environment or technology.
• The software must be enhanced to implement new business requirements.
• The software must be extended to make it interoperable with more modern systems or databases.
• The software must be re-architected to make it viable within a network environment.
Software Engineering:
IEEE has developed a more comprehensive definition:
1) Software engineering is the application of a systematic, disciplined,
quantifiable approach to the development, operation, and maintenance of
software.
2) The study of approaches as in (1).
Software engineering is a layered technology. It encompasses a process, methods for managing and engineering software, and tools.
Each framework activity is populated by a set of software engineering actions, and each action encompasses a set of tasks. A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that produces a tangible outcome.
A process framework establishes the foundation for a complete software engineering
process by identifying a small number of framework activities that apply to all software projects,
regardless of their size or complexity. In addition, the process framework encompasses a set of
umbrella activities that are applicable across the entire software process.
A generic process framework for software engineering encompasses five activities:
● Communication
● Planning
● Modeling
● Construction
● Deployment
These five generic framework activities can be used during the development of small, simple
programs, the creation of large Web applications, and for the engineering of large, complex
computer-based systems.
Software engineering process framework activities are complemented by several Umbrella
Activities. In general, umbrella activities are applied throughout a software project and help a
software team manage and control progress, quality, change, and risk. Typical umbrella activities
include:
● Risk management
● Technical reviews
● Measurement
● Reusability management
Software Myths
Software myths (beliefs about software and the process used to build it) can be traced to the earliest days of computing.
Management Myths:
Managers with software responsibility, like managers in most disciplines, are often
under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a
drowning person who grasps at a straw, a software manager often grasps at belief in a software
myth.
Myth: We already have a book that’s full of standards and procedures for building software. Won’t that provide my people with everything they need to know?
Reality:
• The book of standards may very well exist, but is it used?
• Are software practitioners aware of its existence?
• Does it reflect modern software engineering practice?
• Is it complete?
• Is it adaptable?
• Is it streamlined to improve time to delivery while still maintaining a focus on quality?
In many cases, the answer to all of these questions is no.
Myth: If we get behind schedule, we can add more programmers and catch up
Reality: Software development is not a mechanistic process like manufacturing. “Adding people
to a late software project makes it later.” At first, this statement may seem counterintuitive.
However, as new people are added, people who were already working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort.
Myth: If we decide to outsource the software project to a third party, I can just relax and let
that firm build it.
Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.
Customer Myths
A customer who requests computer software may be a person at the next desk, a technical
group down the hall, the marketing /sales department, or an outside company that has requested
software under contract.
Myth: A general statement of objectives is sufficient to begin writing programs - we can fill in
details later.
Reality: Although a comprehensive and stable statement of requirements is not always possible,
an ambiguous statement of objectives is a recipe for disaster. Unambiguous requirements are
developed only through effective and continuous communication between customers and
developers.
Myth: Project requirements continually change, but change can be easily accommodated
because software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with the time at
which it is introduced. When requirement changes are requested early, the cost impact is
relatively small. However, as time passes, cost impact grows rapidly – resources have been
committed, a design framework has been established, and change can cause upheaval that
requires additional resources and major design modification.
Practitioner's Myths
Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to
get done.” Industry data indicate that between 60 and 80 percent of all effort expended on
software will be expended after it is delivered to the customer for the first time.
Myth: Until I get the program "running" I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from
the inception of a project—the formal technical review. Software reviews are a "quality filter"
that is more effective than testing for finding certain classes of software defects.
Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many
elements. Documentation provides a foundation for successful engineering and, more
importantly, guidance for software support.
Myth: Software engineering will make us create voluminous and unnecessary
documentation and will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating quality. Better
quality leads to reduced rework. And reduced rework results in faster delivery times. Many
software professionals recognize the fallacy of the myths just described. Regrettably, habitual
attitudes and methods foster poor management and technical practices, even when reality dictates
a better approach. Recognition of software realities is the first step toward the formulation of
practical solutions for software engineering.
PROCESS MODELS
A GENERIC PROCESS MODEL
The software process is represented schematically in the following figure. Each framework activity is
populated by a set of software engineering actions. Each software engineering action is defined by a
task set that identifies the work tasks that are to be completed, the work products that will be produced,
the quality assurance points that will be required, and the milestones that will be used to indicate
progress.
In addition, a set of umbrella activities (project tracking and control, risk management, quality assurance, configuration management, technical reviews, and others) is applied throughout the process.
Another aspect, called process flow, describes how the framework activities and the actions and tasks that occur within each framework activity are organized with respect to sequence and time, as illustrated in the following figure.
Fig: A generic process framework for software engineering
A linear process flow executes each of the five framework activities in sequence.
An iterative process flow repeats one or more of the activities before proceeding to the next. An
evolutionary process flow executes the activities in a “circular” manner. A parallel process flow
executes one or more activities in parallel with other activities
Defining a Framework Activity
A software team would need significantly more information before it could properly execute any
one of these activities as part of the software process. Therefore, you are faced with a key
question: What actions are appropriate for a framework activity, given the nature of the problem
to be solved, the characteristics of the people doing the work, and the stakeholders who are
sponsoring the project?
Identifying a Task Set
Different projects demand different task sets. The software team chooses the task set
based on problem and project characteristics. A task set defines the actual work to be done to
accomplish the objectives of a software engineering action.
Process Patterns
A process pattern describes a process-related problem that is encountered during
software engineering work, identifies the environment in which the problem has been
encountered, and suggests one or more proven solutions to the problem. Stated in more general
terms, a process pattern provides you with a template —a consistent method for describing
problem solutions within the context of the software process.
Patterns can be defined at any level of abstraction. In some cases, a pattern might be used to describe a
problem (and solution) associated with a complete process model (e.g., prototyping). In other
situations, patterns can be used to express a problem (and solution) associated with a framework
activity (e.g., planning) or an action within a framework activity (e.g., project estimating).
Ambler has proposed a template for describing a process pattern:
Pattern Name. The pattern is given a meaningful name describing it within the context of the
software process (e.g., Technical Reviews).
Forces. The environment in which the pattern is encountered and the issues that make the
problem visible may affect its solution.
Type. The pattern type is specified. Ambler suggests three types:
1. Stage pattern—defines a problem associated with a framework activity for the process.
Since a framework activity encompasses multiple actions and work tasks, a stage
pattern incorporates multiple task patterns (see the following) that are relevant to the
stage (framework activity). An example of a stage pattern might be Establishing
Communication. This pattern would incorporate the task pattern Requirements
Gathering and others.
2. Task pattern—defines a problem associated with a software engineering action or
work task and relevant to successful software engineering practice (e.g., Requirements
Gathering is a task pattern).
3. Phase pattern—defines the sequence of framework activities that occurs within the
process, even when the overall flow of activities is iterative. An example of a phase
pattern might be Spiral Model or Prototyping.
Initial context. Describes the conditions under which the pattern applies. Before the initiation
of the pattern:
(1) What organizational or team-related activities have already occurred?
(2) What is the entry state for the process?
(3) What software engineering information or project information already exists?
Problem. The specific problem to be solved by the pattern.
Solution. Describes how to implement the pattern successfully. It also describes how software
engineering information or project information that is available before the initiation of the pattern
is transformed as a consequence of the successful execution of the pattern.
Resulting Context. Describes the conditions that will result once the pattern has been
successfully implemented. Upon completion of the pattern:
(1) What organizational or team-related activities must have occurred?
(2) What is the exit state for the process?
(3) What software engineering information or project information has been developed?
Related Patterns. Provides a list of all process patterns directly related to this one. This may be represented as a hierarchy or in some other diagrammatic form.
Known Uses and Examples. Indicates the specific instances in which the pattern is applicable.
Process patterns provide an effective mechanism for addressing problems associated with any software
process. The patterns enable you to develop a hierarchical process description that begins at a high level
of abstraction (a phase pattern).
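Purely as an illustration, Ambler's template above can be pictured as a structured record. The following Python sketch is not part of any standard; the field names and the example values are assumptions chosen only to show how the parts of a pattern fit together.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessPattern:
    """Holds the parts of a process pattern as described by Ambler's template."""
    pattern_name: str
    forces: str
    pattern_type: str          # "stage", "task", or "phase"
    initial_context: str
    problem: str
    solution: str
    resulting_context: str
    related_patterns: List[str] = field(default_factory=list)
    known_uses_and_examples: str = ""

# Example instance (values are illustrative, not taken from a real pattern catalog).
establishing_communication = ProcessPattern(
    pattern_name="Establishing Communication",
    forces="Stakeholders are distributed and requirements are unclear.",
    pattern_type="stage",
    initial_context="Project has been approved; no requirements gathered yet.",
    problem="Elicit initial requirements from all stakeholders.",
    solution="Hold facilitated meetings and apply the Requirements Gathering task pattern.",
    resulting_context="An agreed, prioritized list of requirements exists.",
    related_patterns=["Requirements Gathering"],
    known_uses_and_examples="Typical at the start of most projects.",
)
print(establishing_communication.pattern_name)
```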
THE CAPABILITY MATURITY MODEL INTEGRATION (CMMI):
Prescriptive Process Models
Prescriptive process models prescribe a set of process elements: framework activities, software engineering actions, tasks, work products, quality assurance, and change control mechanisms for each project.
The Waterfall Model
The waterfall model, sometimes called the classic life cycle, suggests a systematic,
sequential approach to software development that begins with customer specification of
requirements and progresses through planning, modeling, construction, and deployment.
The waterfall model is the oldest paradigm for software engineering. The problems that
are sometimes encountered when the waterfall model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes. Although
the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The waterfall
model requires this and has difficulty accommodating the natural uncertainty that
exists at the beginning of many projects.
3. The customer must have patience. A working version of the program(s) will not
be available until late in the project period.
This model is suitable when there are only a limited number of new development efforts and when requirements are well defined and reasonably stable.
Incremental Process Models
The incremental model delivers a series of releases, called increments, that provide
progressively more functionality for the customer as each increment is delivered.
The incremental model combines elements of linear and parallel process flows discussed earlier. The incremental model applies linear sequences in a staggered fashion as calendar
time progresses. Each linear sequence produces deliverable “increments” of the software in a
manner that is similar to the increments produced by an evolutionary process flow.
For example, word-processing software developed using the incremental paradigm might
deliver basic file management, editing, and document production functions in the first increment;
more sophisticated editing and document production capabilities in the second increment;
spelling and grammar checking in the third increment; and advanced page layout capability in
the fourth increment.
When an incremental model is used, the first increment is often a core product. That is, basic
requirements are addressed but many supplementary features remain undelivered. The core
product is used by the customer. As a result of use and/or evaluation, a plan is developed for the
next increment. The plan addresses the modification of the core product to better meet the needs
of the customer and the delivery of additional features and functionality. This process is repeated
following the delivery of each increment until the complete product is produced.
Incremental development is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been established for the project. Early
increments can be implemented with fewer people. If the core product is well received, then
additional staff (if required) can be added to implement the next increment. In addition,
increments can be planned to manage technical risks.
Evolutionary Process Models
Evolutionary models are iterative. They are characterized in a manner that enables you to
develop increasingly more complete versions of the software with each iteration. There are two
common evolutionary process models.
Prototyping Model: Often, a customer defines a set of general objectives for software, but
does not identify detailed requirements for functions and features. In other cases, the developer
may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the
form that human-machine interaction should take. In these, and many other situations, a
prototyping paradigm may offer the best approach.
Although prototyping can be used as a stand-alone process model, it is more
commonly used as a technique that can be implemented within the context of any one of the
process models. The prototyping paradigm begins with communication. You meet with other
stakeholders to define the overall objectives for the software, identify whatever requirements are
known, and outline areas where a further definition is mandatory. A prototyping iteration is
planned quickly, and modeling (in the form of a “quick design”) occurs. A quick design
focuses on a representation of those aspects of the software that will be visible to end users.
Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders,
while at the same time enabling you to better understand what needs to be done.
The prototype serves as a mechanism for identifying software requirements. If a working
prototype is to be built, you can make use of existing program fragments or apply tools that
enable working programs to be generated quickly. The prototype can serve as “the first system.”
Prototyping can be problematic for the following reasons:
1. Stakeholders see what appears to be a working version of the software, unaware that
the prototype is held together haphazardly, unaware that in the rush to get it
working you haven’t considered overall software quality or long-term
maintainability.
2. As a software engineer, you often make implementation compromises to get a
prototype working quickly. An inappropriate operating system or programming
language may be used simply because it is available and known; an inefficient
algorithm may be implemented simply to demonstrate capability.
Although problems can occur, prototyping can be an effective paradigm for software
engineering.
The Spiral Model: Originally proposed by Barry Boehm, the spiral model is an
evolutionary software process model that couples the iterative nature of prototyping with the
controlled and systematic aspects of the waterfall model. It provides the potential for rapid
development of increasingly more complete versions of the software. Boehm describes the
model in the following manner
The spiral development model is a risk-driven process model generator that is used to
guide multi-stakeholder concurrent engineering of software-intensive systems. It has two
main distinguishing features. One is a cyclic approach for incrementally growing a system’s
degree of definition and implementation while decreasing its degree of risk. The other is a set of
anchor point milestones for ensuring stakeholder commitment to feasible and mutually
satisfactory system solutions.
Using the spiral model, the software is developed in a series of evolutionary releases.
During early iterations, the release might be a model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
Fig: The Spiral Model
A spiral model is divided into a set of framework activities defined by the software
engineering team. As this evolutionary process begins, the software team performs activities that
are implied by a circuit around the spiral in a clockwise direction, beginning at the center. Risk
is considered as each revolution is made. Anchor point milestones are a combination of work
products and conditions that are attained along the path of the spiral and are noted for each
evolutionary pass.
The first circuit around the spiral might result in the development of a product
specification; subsequent passes around the spiral might be used to develop a prototype and then
progressively more sophisticated versions of the software. Each pass through the planning region
results in adjustments to the project plan.
The spiral model can be adapted to apply throughout the life of the computer software.
Therefore, the first circuit around the spiral might represent a “concept development project”
that starts at the core of the spiral and continues for multiple iterations until concept development
is complete. The new product will evolve through several iterations around the spiral. Later, a
circuit around the spiral might be used to represent a “product enhancement project.”
A spiral model is a realistic approach to the development of large-scale systems and software.
Because software evolves as the process progresses, the developer and customer better
understand and react to risks at each evolutionary level. It maintains the systematic stepwise
approach suggested by the classic life cycle but incorporates it into an iterative framework that
more realistically reflects the real world.
Concurrent Models
The concurrent development model, sometimes called concurrent engineering, allows a
software team to represent iterative and concurrent elements of any of the process models. The
concurrent model is often more appropriate for product engineering projects where different
engineering teams are involved.
The concurrent model provides a schematic representation of one software engineering activity (for example, the modeling activity) using a concurrent modeling approach.
The modeling activity may be in any one of several states (e.g., under development, awaiting changes, done) at any given time. Similarly,
other activities, actions, or tasks (e.g., communication or construction) can be represented
analogously.
All software engineering activities exist concurrently but reside in different states.
Concurrent modeling defines a series of events that will trigger transitions from state to state for each of the software engineering activities, actions, or tasks. For example, the event analysis model correction will trigger the requirements analysis action from the done state into the awaiting changes state.
Concurrent modeling applies to all types of software development and provides an
accurate picture of the current state of a project. Each activity, action, or task on the network
exists simultaneously with other activities, actions, or tasks. Events generated at one point in the
process network trigger transitions among the states.
The Unified Process (UP)
The Unified Process (UP) defines the following phases:
● Inception
● Elaboration
● Construction
● Transition
● Production
The inception phase of the UP encompasses both customer communication and planning
activities. By collaborating with stakeholders, business requirements for the software are
identified; a rough architecture for the system is proposed; and a plan for the iterative,
incremental nature of the ensuing project is developed.
The elaboration phase encompasses the communication and modeling activities of the
generic process model. Elaboration refines and expands the preliminary use cases that were
developed as part of the inception phase and expands the architectural representation to include five different views of the software—the use case model, the requirements model, the design model, the implementation model, and the deployment model. Elaboration creates an
“executable architectural baseline” that represents a “first cut” executable system.
The construction phase of the UP is identical to the construction activity defined for the
generic software process. Using the architectural model as input, the construction phase develops
or acquires the software components that will make each use case operational for end users. To
accomplish this, requirements and design models that were started during the elaboration phase
are completed to reflect the final version of the software increment. All necessary and required
features and functions for the software increment (i.e., the release) are then implemented in the
source code.
The transition phase of the UP encompasses the latter stages of the generic construction
activity and the first part of the generic deployment (delivery and feedback) activity. Software is
given to end users for beta testing and user feedback reports both defects and necessary
changes. After the transition phase, the software increment becomes a usable software release.
The production phase of the UP coincides with the deployment activity of the generic
process. During this phase, the ongoing use of the software is monitored, support for the
operating environment (infrastructure) is provided, and defect reports and requests for changes
are submitted and evaluated. At the same time the construction, transition, and production phases
are likely being conducted, work may have already begun on the next software increment. This
means that the five UP phases do not occur in a sequence, but rather with staggered
concurrency.
Personal Software Process (PSP)
The Personal Software Process (PSP), proposed by Watts Humphrey, defines the following framework activities:
● Planning
● High-level design
● High-level design review
● Development
● Postmortem
PSP stresses the need to identify errors early and, just as important, to understand the types
of errors that you are likely to make. PSP represents a disciplined, metrics-based approach to
software engineering that may lead to culture shock for many practitioners.
Team Software Process (TSP)
Watts Humphrey extended the lessons learned from the introduction of PSP and proposed
a Team Software Process (TSP). The goal of TSP is to build a “self-directed” project team that
organizes itself to produce high-quality software.
Humphrey defines the following objectives for TSP:
● Build self-directed teams that plan and track their work, establish goals, and own their
processes and plans. These can be pure software teams or integrated product teams (IPTs)
of 3 to about 20 engineers.
● Show managers how to coach and motivate their teams and how to help them sustain
peak performance.
A self-directed team has a consistent understanding of its overall goals and objectives; defines
roles and responsibilities for each team member; tracks quantitative project data (about
productivity and quality); identifies a team process that is appropriate for the project and a
strategy for implementing the process; defines local standards that apply to the team’s software
engineering work; continually assesses risk and reacts to it; and tracks, manages, and reports
project status.
TSP defines the following framework activities: project launch, high-level design,
implementation, integration and test, and postmortem. TSP makes use of a wide variety of
scripts, forms, and standards that serve to guide team members in their work. “Scripts” define
specific process activities (i.e., project launch, design, implementation, integration and system
testing, postmortem) and other more detailed work functions (e.g., development planning,
requirements development, software configuration management, unit test) that are part of the
team process.
Traditional Methodology
The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-sequential
life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed
before the next phase can begin and there is no overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software development.
The waterfall Model illustrates the software development process in a linear sequential flow. This means that
any phase in the development process begins only if the previous phase is complete. In this waterfall model,
the phases do not overlap.
● Requirement Gathering and Analysis
All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
● System Design
The requirement specifications from the first phase are studied in this phase and the system design is prepared. This design helps in specifying hardware and system requirements and in defining the overall system architecture.
● Implementation
With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality (unit testing).
● Integration and Testing
All the units developed in the implementation phase are integrated into a system after testing of each unit. After integration, the entire system is tested for any faults and failures.
● Deployment of System
Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
● Maintenance
Some issues come up in the client environment. To fix those issues, patches are released. To enhance the product, improved versions are also released. Maintenance is done to deliver these changes in the customer environment.
Advantages
1. It’s very simple and easy to implement
2. Best suitable for small projects
3. Best suitable if requirements are fixed
Limitations
1. Bug fixing is very costly because bugs cannot be identified in the early stages of the life cycle
Agile Methodology
Agile software development refers to a group of software development methodologies based on iterative
development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams.
Agile methods or Agile processes generally promote a disciplined project management process that
encourages frequent inspection and adaptation, a leadership philosophy that encourages teamwork, self-
organization and accountability, a set of engineering best practices intended to allow for rapid delivery of
high-quality software, and a business approach that aligns development with customer needs and company
goals.
Agile development refers to any development process that is aligned with the concepts of the Agile Manifesto. The Manifesto was developed by a group of seventeen leading figures in the software industry, and reflects their experience of what approaches do and do not work for software development.
Advantages of Agile:
● Continuous delivery
● Continuous feedback
● Requirement changes can be accommodated in the middle of development
● Client satisfaction is very high
● Less development time
● Less development cost
Disadvantages of Agile:
● Transfer of technology to new team members may be quite challenging due to lack of documentation.
Principles of Agile (from the Agile Manifesto):
● Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
● Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
● Deliver working software frequently, from a couple of weeks to a couple of months, with a preference
to the shorter timescale.
● Business people and developers must work together daily throughout the project.
● Build projects around motivated individuals. Give them the environment and support they need, and
trust them to get the job done.
● The most efficient and effective method of conveying information to and within a development team
is face-to-face conversation.
● Working software is the primary measure of progress.
● Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
● Continuous attention to technical excellence and good design enhances agility.
● Simplicity--the art of maximizing the amount of work not done--is essential.
● The best architectures, requirements, and designs emerge from self-organizing teams.
● At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly.
Among all these models, the Scrum model is the most popular and frequently used.
Scrum Model
Scrum is an Agile-based model whose name is derived from the game of rugby; it is an iterative model, not a linear sequential model. The total software product is developed increment by increment, and each increment is produced in a time-boxed iteration called a sprint.
Sprint: A Sprint is a time-box of one month or less. A new Sprint starts immediately after the completion of
the previous Sprint.
Sprint Planning: initiates the Sprint by laying out the work to be performed for the Sprint. This resulting
plan is created by the collaborative work of the entire Scrum Team.
Daily Scrum: the purpose of the Daily Scrum is to inspect progress toward the Sprint Goal and adapt the
Sprint Backlog as necessary, adjusting the upcoming planned work.
Release: At the end of a Sprint, the completed increment can be released to the customer as a potentially shippable product.
Sprint Review: The outcome of the Sprint is inspected in this stage. If the product still has features that have not been achieved, they are identified here, and then the product is passed to the Sprint Retrospective stage.
Sprint Retrospective: The Scrum Team inspects how the last Sprint went and plans improvements to be enacted during the next Sprint.
Scrum Artifacts
Scrum’s artifacts represent work or value to provide transparency and opportunities for inspection and
adaptation. Artifacts defined by Scrum are specifically designed to maximize transparency of key information
so that everybody has the same understanding of the artifact. The Scrum Artifacts are:
● Product Backlog
● Sprint Backlog
● Increment
● Product Backlog: The Product Backlog is an ordered list of everything that is known to be needed in the product; it is the single source of requirements for any changes to be made.
● Sprint Backlog: The Sprint Backlog is divided into two parts: the Product Backlog features assigned to the Sprint, and the plan produced in the Sprint planning meeting.
Limitations of Scrum:
● The Scrum framework does not allow changes within a running Sprint.
● The Scrum framework is not a fully described model. If you want to adopt it, you need to fill in the framework with your own details, drawing on approaches such as Extreme Programming (XP), Kanban, or DSDM.
● It can be difficult for a Scrum team to plan, structure, and organize a project that lacks a clear definition. The daily Scrum meetings and frequent reviews require substantial resources.
Agile vs DevOps
Similarities
Differences
● The difference comes once development of the project is completed: the Agile model talks only about development, not operations, whereas the DevOps model covers the complete product life cycle, i.e., both development and operations.
● In the Agile model, separate people are responsible for development, testing, deployment, etc. In DevOps, the DevOps engineer is responsible for everything, from development to operations and from operations to development.
What is DevOps?
In an organization, the people involved in delivering software are broadly classified into two groups:
1. The development group
2. The operations / non-development / administrators group
Each of these groups is again divided into smaller sets of roles.
Development group
The people who are involved in
● Planning
● Coding
● Build
● Testing
are considered the development team.
For example, to develop a project, the persons required are:
● Design Architect (DA)
● Developers/Coders
● Build Engineer
● Test Engineer/QA
Once the developers have written the code, it has to be integrated together and released as executable code.
Operations group
Once a project is developed, it is ready to move to the client machine (assume that the development team's job is completed). To move it to the client machine, configuration of the software is required.
The people who are involved in
1. Release engineering
2. Deployment
3. Operations
4. Monitoring
are considered the operations group.
For example:
● Release Engineer
● Deployment Engineer
● System Admin
● Database Admin
● Network Admin, etc.
The main aim of DevOps is to implement collaboration between the development and operations teams.
Facebook came up with the dark launching technique.
3. Time to market
4. Problem resolution
DEVOPS LIFECYCLE
Continuous Development:
● This is the phase that involves the planning and coding of the software application's functionality. There are no specific tools for planning as such, but there are a number of tools for maintaining the code.
● The vision of the project is decided during the planning phase, and then the actual coding of the application begins.
● The code can be written in any language, but it is maintained using version control tools; these are the continuous development tools.
● Tools like Git enable communication between the development and the operations teams.
● Git is a distributed version control system that supports distributed, non-linear workflows and provides data assurance for developing quality software when you are working on a large project with a huge number of collaborators.
● It is very important to have communication between the collaborators while making changes in a project. Imagine a scenario where a team of 10 developers is working on a project: if a developer commits a change to the code and that change causes an error, how will you track down which developer made which change and how to fix the error? Tools such as Git solve these code-maintenance problems.
● In Git there is one central repository (the main server) where the code of the application is present, and there is also one local repository where you can also keep the code of the application. The working directory, or workspace, is where you develop the application.
● Pull: you can fetch the code from the main server to the local repository using pull.
● Push: you can forward the code from the local repository to the main repository (the main server) using push.
● Update: you can fetch the code from the local repository into your working directory using update.
● Commit: you can forward your code from the working directory to the local repository using commit. This is an overview of Git (see the sketch below).
● The advantage of using tools like Git: if for any reason the main server crashes or is unavailable, the local repository still has the code of your application.
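The pull/commit/push flow described above can be sketched by driving Git's command-line interface from Python. This is only an illustration: the repository URL, file name, and branch name are placeholders, and it assumes Git is installed and configured locally.

```python
import subprocess

def run(*args, cwd=None):
    """Run a git command, echoing it, and raise if it fails."""
    print("$ git", " ".join(args))
    subprocess.run(["git", *args], cwd=cwd, check=True)

# Clone the central (main) repository into a local repository.
# The URL below is a placeholder, not a real project repository.
run("clone", "https://example.com/team/project.git", "project")

# ... edit files in the working directory ...
with open("project/feature.txt", "w") as f:
    f.write("new functionality\n")

run("add", "feature.txt", cwd="project")                  # stage the change
run("commit", "-m", "Add new feature", cwd="project")     # working directory -> local repository
run("pull", "--rebase", "origin", "main", cwd="project")  # fetch teammates' changes first
run("push", "origin", "main", cwd="project")              # local repository -> central repository
```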
Continuous Integration:
● This is the stage where the code supporting new functionality is integrated with the existing code, since there is continuous development of the software.
● The updated code needs to be integrated continuously as well as smoothly with the systems to reflect the changes to the end users.
● The changed code should also ensure that there are no errors at runtime, which allows us to test the changes and check how they react with other changes. A very popular tool used in this phase is Jenkins.
● Using Jenkins, one can pull the latest version of the code from the Git repository and produce a build, which can be initially deployed to the test servers or the production servers.
● Imagine a developer commits a change to the code in the Git repository. As soon as there is a change in the code on the Git repository, Jenkins will fetch the code and produce a build (an executable file, e.g., in the form of a JAR file), and this build can be forwarded to the next stages, that is, either the production servers or the test servers (see the sketch below).
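Jenkins itself is configured through its own jobs or a Jenkinsfile rather than code like this; the sketch below only illustrates, under assumed names (REPO_DIR, BUILD_CMD), the "detect a new commit, build, hand the artifact on" loop that a CI server automates.

```python
import subprocess
import time

REPO_DIR = "project"               # local checkout of the Git repository (assumed to exist)
BUILD_CMD = ["./gradlew", "jar"]   # hypothetical build command that produces a JAR

def latest_commit() -> str:
    """Return the current HEAD commit hash of the checkout."""
    out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=REPO_DIR,
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def poll_and_build(interval_s: int = 60):
    """Very small CI loop: pull, and if new commits arrived, build and hand the artifact on."""
    last_built = None
    while True:
        subprocess.run(["git", "pull"], cwd=REPO_DIR, check=True)
        head = latest_commit()
        if head != last_built:
            print(f"Change detected ({head[:8]}), building...")
            subprocess.run(BUILD_CMD, cwd=REPO_DIR, check=True)
            print("Build produced; forward the artifact to the test servers here.")
            last_built = head
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_and_build()
```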
Continuous Testing:
● This is the stage where the developed software is continuously tested for bugs.
● For continuous testing, automated testing tools such as Selenium, TestNG, JUnit, etc. are used.
● These tools allow the QA team to test multiple code bases thoroughly and in parallel to ensure that there are no flaws in the functionality.
● In this phase you can use Docker containers for simulating the test environment. Selenium does the automated testing, and reports are generated by TestNG; but to automate this entire testing phase you need a trigger, and that trigger is provided by a continuous integration tool such as Jenkins (see the sketch below).
● Automation testing saves a lot of time, effort, and labor in executing the test cases.
● Besides that, report generation is a big plus: the task of evaluating which test cases failed in a test run gets simpler. Test runs can also be scheduled for execution at predefined times. Once the code is tested, it is continuously integrated with the existing code.
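As an illustration of the kind of automated check such tools run, here is a minimal browser test using the Selenium Python bindings with the built-in unittest runner. The URL, the expected title, and the use of Chrome are assumptions; a real suite would target the application under test.

```python
import unittest
from selenium import webdriver

class HomePageTest(unittest.TestCase):
    def setUp(self):
        # Requires a local Chrome/chromedriver installation.
        self.driver = webdriver.Chrome()

    def test_title_contains_product_name(self):
        # Placeholder URL and title; replace with the application under test.
        self.driver.get("https://example.com")
        self.assertIn("Example", self.driver.title)

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
```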
Continuous Deployment:
This is the stage where the code is deployed to the production environment. Here we ensure that the code is correctly deployed on all the servers.
DevOps would be incomplete without configuration management tools and containerization tools; both sets of tools help us achieve continuous deployment.
Configuration management
● Configuration management is the act of establishing and maintaining consistency in an application's functional requirements and performance.
● It is the act of releasing deployments to servers, scheduling updates on all the servers and, most importantly, keeping the configurations consistent across all the servers, since new code is deployed on a continuous basis.
● Configuration management tools play an important role in executing tasks quickly and frequently. Popular tools that are used are Puppet, Chef, and SaltStack (a toy sketch of the idea follows this list).
● Containerization tools also play an equally important role in the deployment stage. Docker and Vagrant are popular tools that help maintain consistency across the development, test, staging, and production environments.
● Besides this, they also help in scaling instances up and down easily. They eliminate the chance of errors or failures in the production environment by packaging and replicating the same dependencies and packages used in the development, testing, and staging environments.
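Puppet, Chef, and SaltStack each have their own declarative languages; the toy Python sketch below (server names and settings are invented) only illustrates the underlying idea of converging every server to one desired configuration, applying only what has drifted.

```python
# Desired configuration that every server should converge to.
DESIRED_CONFIG = {"app_version": "2.3.1", "max_connections": 200, "log_level": "INFO"}

# Current (possibly drifted) state of each server; in reality this would be queried remotely.
servers = {
    "web-01": {"app_version": "2.3.0", "max_connections": 200, "log_level": "DEBUG"},
    "web-02": {"app_version": "2.3.1", "max_connections": 200, "log_level": "INFO"},
}

def converge(name, current, desired):
    """Apply only the settings that differ, so repeated runs are idempotent."""
    for key, want in desired.items():
        if current.get(key) != want:
            print(f"{name}: setting {key} = {want} (was {current.get(key)})")
            current[key] = want  # stand-in for an SSH/API call to the real server
    print(f"{name}: in desired state")

for name, state in servers.items():
    converge(name, state, DESIRED_CONFIG)
```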
Continuous Monitoring
● The final stage in the DevOps lifecycle is continuous monitoring. This is a crucial stage in the DevOps life cycle, aimed at improving the quality of the software by monitoring its performance.
● This practice involves the participation of the operations team, who will monitor the user activity for any bugs or improper behavior of the system. This can also be achieved by making use of dedicated monitoring tools, which continuously monitor the application's performance and highlight issues. Some popular tools used are Splunk, ELK Stack, and Nagios (a toy health-check sketch follows this list).
● These tools monitor the application and the servers closely to check the health of the system proactively. They improve productivity and increase the reliability of the system, reducing IT support costs. Any major issues found can be reported to the development team so that they can be fixed in the continuous development phase.
● These DevOps stages are carried out in a continuous loop until the desired product quality is achieved.
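As a toy illustration of what a monitoring tool does continuously, the sketch below probes a health endpoint and flags slow or failed responses. The endpoint URL, latency threshold, and schedule are placeholders.

```python
import time
import urllib.request

ENDPOINT = "https://example.com/health"   # placeholder health-check URL
LATENCY_THRESHOLD_S = 0.5                 # alert if a response takes longer than this

def check_once():
    """Probe the endpoint once and report status, latency, and any error."""
    start = time.time()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
            latency = time.time() - start
            healthy = resp.status == 200 and latency < LATENCY_THRESHOLD_S
            print(f"status={resp.status} latency={latency:.3f}s healthy={healthy}")
    except Exception as exc:  # network errors, timeouts, HTTP errors
        print(f"ALERT: health check failed: {exc}")

if __name__ == "__main__":
    # A real monitoring tool would run this on a schedule and raise alerts/dashboards.
    for _ in range(3):
        check_once()
        time.sleep(10)
```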
DEVOPS STAGES
1. Plan
● The Plan stage covers everything that happens before the developers start writing code.
● Requirements and feedback are gathered from stakeholders and customers and used to build a product
roadmap to guide future development.
● The product roadmap can be recorded and tracked using a ticket management system such as Jira,
Azure DevOps or Asana which provide a variety of tools that help track project progress, issues and
milestones.
● The product roadmap can be broken down into Epics, Features and User Stories, creating a backlog of tasks that lead directly to the customers’ requirements (a small sketch of such a backlog follows this list).
● The tasks on the backlog can then be used to plan sprints and allocate tasks to the team to begin
development.
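The roadmap-to-backlog breakdown mentioned above (Epics containing Features containing User Stories) can be pictured as nested data. The sketch below is illustrative only; the names and stories are invented, not taken from any real tracker such as Jira.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserStory:
    title: str
    done: bool = False

@dataclass
class Feature:
    name: str
    stories: List[UserStory] = field(default_factory=list)

@dataclass
class Epic:
    name: str
    features: List[Feature] = field(default_factory=list)

# Invented example: one epic broken down into features and stories,
# forming the backlog that sprints are planned from.
epic = Epic("Online ordering", [
    Feature("Shopping cart", [
        UserStory("As a customer, I can add an item to my cart"),
        UserStory("As a customer, I can remove an item from my cart"),
    ]),
    Feature("Checkout", [
        UserStory("As a customer, I can pay with a credit card"),
    ]),
])

backlog = [story for feature in epic.features for story in feature.stories]
print(f"{len(backlog)} stories in the backlog for epic '{epic.name}'")
```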
2. Code
● In addition to the standard toolkit of a software developer, the team has a standard set of plugins
installed in their development environments to aid the development process, help enforce consistent
code-styling and avoid common security flaws and code anti-patterns.
● This helps to teach developers good coding practice while aiding collaboration by providing some
consistency to the code base. These tools also help resolve issues that may fail tests later in the
pipeline, resulting in fewer failed builds.
3. Build
● Once a developer has finished a task, they commit their code to a shared code
repository. There are many ways this can be done, but typically the developer
submits a pull request — a request to merge their new code with the shared
codebase.
● Another developer then reviews the changes they’ve made, and once they’re happy there are no
issues, they approve the pull-request. This manual review is supposed to be quick and lightweight, but
it’s effective at identifying issues early.
● Simultaneously, the pull request triggers an automated process which builds the codebase and runs a
series of end-to-end, integration and unit tests to identify any regressions. If the build fails, or any of
the tests fail, the pull-request fails and the developer is notified to resolve the issue. By continuously
checking code changes into a shared repository and running builds and tests, we can minimise
integration issues that arise when working on a shared codebase, and highlight breaking bugs early in
the development lifecycle.
4. Test
● Once a build succeeds, it is automatically deployed to a staging environment for deeper, out-of-band
testing.
● The staging environment may be an existing hosting service, or it could be a new environment
provisioned as part of the deployment process. This practice of automatically provisioning a new
environment at the time of deployment is referred to as Infrastructure-as-Code (IaC) and is a core part
of many DevOps pipelines (a toy IaC-style sketch appears at the end of this section).
● Once the application is deployed to the test environment, a series of manual and automated tests are
performed. Manual testing can be traditional User Acceptance Testing (UAT) where people use the
application as the customer would to highlight any issues or refinements that should be addressed
before deploying into production.
● At the same time, automated tests might run security scanning against the application, check for
changes to the infrastructure and compliance with hardening best-practices, test the performance of
the application or run load testing. The testing that is performed during this phase is up to the
organization and what is relevant to the application, but this stage can be considered a test-bed that
lets you plug in new testing without interrupting the flow of developers or impacting the production
environment.
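Real Infrastructure-as-Code is usually written in a tool-specific language (e.g., Terraform or CloudFormation templates). Purely to illustrate the idea that the same environment description can build both the staging and the production environments, here is a toy Python sketch; provision_environment and the settings are hypothetical, not a real provisioning API.

```python
# Toy illustration of Infrastructure-as-Code: the environment is described as data,
# and the same description is used to build staging and production.

BASE_ENVIRONMENT = {
    "web_servers": 2,
    "database": "postgres-14",
    "load_balancer": True,
}

def provision_environment(name, description):
    """Hypothetical provisioning step; a real tool would create cloud resources here."""
    print(f"Provisioning '{name}':")
    for resource, value in description.items():
        print(f"  creating {resource} = {value}")

# Staging is built from the same description, scaled down for cheaper testing.
staging = {**BASE_ENVIRONMENT, "web_servers": 1}
provision_environment("staging", staging)

# Production uses the full description, so it matches what was already tested.
provision_environment("production", BASE_ENVIRONMENT)
```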
5. Release
● The Release phase is a milestone in a DevOps pipeline — it’s the point at which we say a build is
ready for deployment into the production environment. By this stage, each code change has passed a
series of manual and automated tests, and the operations team can be confident that breaking issues
and regressions are unlikely.
● Depending on the DevOps maturity of an organization, they may choose to automatically deploy any
build that makes it to this stage of the pipeline. Developers can use feature flags to turn off new
features so they can’t be seen by the customers until they are ready for action (see the feature-flag sketch at the end of this section). This model is
considered the nirvana of DevOps and is how organizations manage to deploy multiple releases of
their products every day.
● Alternatively, an organization may want to have control over when builds are released to production.
They may want to have a regular release schedule or only release new features once a milestone is
met. You can add a manual approval process at the release stage which only allows certain people
within an organization to authorize a release into production.
● The tooling lets you customize this; it’s up to you how you want to go about things.
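The feature-flag idea mentioned above can be sketched very simply: new code paths are wrapped in a check against a flag store, so a build can be deployed with the feature switched off and turned on later without redeploying. The flag names and the in-memory store below are illustrative assumptions.

```python
# Minimal feature-flag sketch: flags decide at run time whether new code paths run.
FLAGS = {
    "new_checkout_flow": False,  # shipped in the build, but hidden from customers
    "dark_mode": True,
}

def is_enabled(flag_name: str) -> bool:
    """Look up a flag; unknown flags default to off so unfinished features stay hidden."""
    return FLAGS.get(flag_name, False)

def checkout():
    if is_enabled("new_checkout_flow"):
        print("Running the new checkout flow")
    else:
        print("Running the existing checkout flow")

checkout()                          # uses the old path
FLAGS["new_checkout_flow"] = True   # "release" the feature without redeploying
checkout()                          # now uses the new path
```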
6. Deploy
● Finally, a build is ready for the big time and it is released into production. There are several tools and
processes that can automate the release process to make releases reliable with no outage window.
● The same Infrastructure-as-Code that built the test environment can be configured to build the
production environment. We already know that the test environment was built successfully, so we can
rest assured that the production release will go off without a hitch.
● A blue-green deployment lets us switch to the new production environment with no outage. When the new environment is built, it sits alongside the existing production environment. When the new environment is ready, the hosting service points all new requests to the new environment. If at any point an issue is found with the new build, you can simply tell the hosting service to point requests back to the old environment while you come up with a fix (see the sketch below).
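The blue-green switch described above amounts to changing which environment the router (hosting service) points at, and pointing back if something goes wrong. The sketch below is a toy model; the environment names and the Router object are assumptions, not a real hosting API.

```python
class Router:
    """Stand-in for the hosting service that directs incoming requests."""
    def __init__(self, active_env):
        self.active_env = active_env

    def point_to(self, env):
        print(f"Routing all new requests to the '{env}' environment")
        self.active_env = env

# 'blue' is the current production environment; 'green' is the new build,
# running alongside it until it is ready.
router = Router(active_env="blue")

deploy_ok = True                   # imagine health checks on the green environment
router.point_to("green")           # cut over with no outage

if not deploy_ok:
    router.point_to("blue")        # instant rollback if an issue is found
```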
7. Operate
● The new release is now live and being used by the customers.
● The operations team is now hard at work, making sure that everything is running smoothly. Based on
the configuration of the hosting service, the environment automatically scales with load to handle
peaks and troughs in the number of active users (a toy scaling rule is sketched at the end of this section).
● The organization has also built a way for their customers to provide feedback on their service, as well
as tooling that helps collect and triage this feedback to help shape the future development of the
product. This feedback loop is important — nobody knows what they want more than the customer,
and the customer is the world’s best testing team, donating many more hours to testing the application
than the DevOps pipeline ever could.
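The automatic scaling mentioned above usually boils down to a rule that compares load against thresholds and adjusts the number of instances. The sketch below is a toy rule with invented thresholds, not any particular cloud provider's autoscaler.

```python
def desired_instances(active_users, users_per_instance=500,
                      min_instances=2, max_instances=10):
    """Simple scaling rule: enough instances for the load, within fixed bounds."""
    needed = -(-active_users // users_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

# Peaks and troughs in the number of active users drive the instance count up and down.
for users in (300, 2600, 7000, 120):
    print(f"{users:5d} active users -> {desired_instances(users)} instances")
```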
8. Monitor
● The ‘final’ phase of the DevOps cycle is to monitor the environment. This builds on the customer feedback provided in the Operate phase by collecting data and providing analytics on customer behavior, performance, errors and more.
● We can also do some introspection and monitor the DevOps pipeline itself, monitoring for potential
bottlenecks in the pipeline which are causing frustration or impacting the productivity of the
development and operations teams.
● All of this information is then fed back to the Product Manager and the development team to close the
loop on the process. It would be easy to say this is where the loop starts again, but the reality is that
this process is continuous. There is no start or end, just the continuous evolution of a product
throughout its lifespan, which only ends when people move on or don’t need it any more.