Compromising Between ELD and EED Using GATOOL in MATLAB
SUBHANKAR SAU
Assistant Professor
BACHELOR OF TECHNOLOGY
IN
Electrical & Electronics Engineering
ROURKELA
MARCH 2018
Bonafide Certificate
This is to certify that the project titled “Solution of Combined Economic Emission Dispatch Problem in Power System” is a bonafide record of the work done by SUBHANKAR SAU (1401216049) in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Electrical and Electronics Engineering of the Padmanava College of Engineering, Rourkela, during the year 2018-2019.
ACKNOWLEDGEMENT
The satisfaction that accompanies the successful completion of any task would be
incomplete without the mention of the people who made it possible and whose guidance
and encouragement crown all efforts to success.
We are extremely grateful to our respected Principal, Prof. (Dr.) Ananta Kumar Sahoo, for fostering an excellent academic climate in our institution. I also express my sincere gratitude to our respected Head of the Department, Prof. P. K. Panigrahi, for his guidance and encouragement in carrying out this project work and for providing all the logistic support in bringing out this project.
We would like to convey thanks to our project supervisor, Prof. Sarat Kumar Mishra, for his valuable guidance, encouragement, cooperation and kindness during the entire duration of the course and academics.
Last but not least, we also thank our friends and family members for helping us in completing the project.
ABSTRACT
Economic load dispatch deals with scheduling the generation of the units of a multi-unit power plant for a particular load demand. The problem is complex due to the presence of different types of constraints, such as the generator limits, the power balance and the minimisation of losses. The conventional method of solution was the Lagrangian method in combination with the Gauss-Seidel or Newton-Raphson method. But with the rising problem of pollution due to the burning of fossil fuels in thermal power plants, it has become necessary to minimise the emission from the power plants as well. The simultaneous minimisation of fuel cost and emission is difficult, as a minimum fuel cost schedule may not lead to minimum emission. For such multi-objective optimisation problems both objectives (fuel cost and emission) cannot be minimised simultaneously; rather, a schedule has to be arrived at that gives a suitable compromise between the two. Here we formulate the problem for the standard test cases (IEEE 14 and 30 bus systems). The multi-objective problem has been solved using the weighted sum approach. The problem has been solved using the bio-inspired genetic algorithm, as the conventional techniques for solving the ELD problem are not efficient enough for the solution of multi-objective problems. The solutions were compared with the solutions of the single-objective problems to test the effectiveness of the proposed method.
Keywords:
Combined economic emission dispatch
Economic load dispatch
Lagrangian multiplier
Newton Raphson
Optimal power flow
Price penalty factor
TABLE OF CONTENTS
COVER PAGE i
BONAFIDE CERTIFICATE ii
ACKNOWLEDGEMENT iii
ABSTRACT iv
TABLE OF CONTENTS v
LIST OF FIGURES ix
LIST OF TABLES xi
CHAPTER 1: INTRODUCTION
1.1 OVERVIEW 1
1.2 PROJECT MOTIVATION 2
1.4 OBJECTIVE 5
1.5 INTRODUCTION 5
UNCONSTRAINED MINIMIZATION
1.6.3 ECONOMIC LOAD DISPATCH OF 6
CHAPTER 2: INTRODUCTION TO GENETIC ALGORITHM
2.2 COMPONENTS OF GA 15
2.2.1 REPRESENTATION 15
2.2.2 INITIALIZATION 17
CHAPTER 3: EXPLORING THE TOOLS OF GENETIC ALGORITHM IN MATLAB
RESUMING GA FROM THE FINAL POPULATION OF A PREVIOUS RUN
3.6 ANALYZING THE ACCURACY BY CHANGING THE PARAMETERS 29
3.9 GA TOOL 33
CHAPTER 4: ECONOMIC LOAD DISPATCH IN POWER SYSTEM
4.4 ECONOMIC LOAD DISPATCH NEGLECTING LOSSES 43
7.1 CONCLUSION 67
APPENDIX 68
REFERENCES 92
LIST OF TABLES
4. Results of comparison of cost and emission optimization and compromised values 53
5. Cost and emission coefficients, unit characteristics of the IEEE 14 bus system 54
8. Results of emission dispatch on the IEEE 30 bus system applying genetic algorithm 63
10. Comparison of different methods 65
LIST OF FIGURES
CHAPTER – 1: INTRODUCTION
1.1 OVERVIEW
Engineers are always concerned with the cost of products and services. Minimizing the
operating cost is very important in all practical power systems. Economic load dispatch is the
technique in which active power outputs are allocated to committed generator units with the aim
of minimizing generation cost in compliance with all constraints of the network. The traditional
methods include Lambda-Iterative technique, Newton-Raphson Method, Gradient Method, etc.
But these conventional methods need linear incremental cost curves for the generators. In practice
the input-output curves of generators are discrete and non-linear due to ramp-rate limits, multiple
fuel effects and restricted zones of operation.
The complex ELD problem needs to be solved by modern heuristic or probabilistic search optimization techniques like DP (dynamic programming), GA (genetic algorithms), AI (artificial intelligence) and particle swarm optimization. EP (evolutionary programming) exhibits robustness, but sometimes shows slow convergence near the optimum point. GA is a probabilistic algorithm, and it comes out as a better algorithm owing to its parallel search approach, which offers global optimization. Particle Swarm Optimization was introduced by James Kennedy and Russell Eberhart. It solves non-linear, hard
Optimization was introduced by James Kennedy and Russell Eberhart. It solves non-linear hard
optimization problems. It was inspired by animal social behaviour like that of a school of fishes or
a flock of birds, etc. This technique operates without requiring information about the gradient of
objective function or error function and it easily obtains the independent best solution. PSO
technique provides high quality solution in less time and shows fast convergence.
In this report, the IEEE 14- and 30-bus systems are taken as test cases. Economic scheduling of the generators is done using the lambda iteration method and GA (genetic algorithm), and at the end the fuel costs obtained in the different cases are compared. All the analyses are performed in MATLAB.
In recent years this problem area has taken on a subtle twist as the public has become increasingly concerned with environmental matters, so that economic dispatch now includes the dispatch of
systems to minimize pollutants, as well as to achieve minimum cost. In addition, there is a need to
expand the limited economic optimization problem to incorporate constraints on system operation
to ensure the security of the system, thereby preventing the collapse of the system due to
unforeseen conditions.
Economic load dispatch (ELD) and economic emission dispatch (EED) have been applied to obtain the optimal fuel cost and the optimal emission of the generating units, respectively, and a weighted sum method has then been used to arrive at a compromise point between these two objectives.
Sulaiman et al. [4] proposed a paper on the solution of the economic load dispatch problem using the metaheuristic firefly algorithm (FA). They used the 26-bus system to show the effectiveness of the firefly algorithm. The FA results were then compared with the continuous genetic algorithm and the lambda iteration method for cost minimization, showing that FA is more robust and consistent.
Amoli et al. [5] presented the application of the firefly algorithm to solve economic load dispatch with cubic and quadratic fuel cost functions. They also compared the simulation
results with genetic algorithm, modified Particle Swarm Optimization (PSO), pattern search,
dynamic programming and improved genetic algorithm with multiplier updating techniques. The
simulation was carried out on 1400 MW load, and it was concluded that the FA works more
efficiently than other mentioned methods in terms of better optimal solution while fulfilling the
equality and inequality constraints.
Swarnkar [6] proposed an approach to solve economic load dispatch with reduced power losses using the firefly algorithm; the author also introduced the Biogeography-Based Optimization (BBO)
algorithm to solve the described problem. The effectiveness and the efficiency of the firefly
algorithm was later compared with GA, PSO, Artificial Bee Colony optimization (ABC), BBO
and Bacterial Foraging Algorithm (BFA) and other optimization techniques. BBO was found to
be more capable for obtaining better quality solutions with higher computational efficiency and
stable convergence characteristic.
Latifa et al. [7] showed the efficiency and the feasibility of the firefly algorithm to solve the
economic load dispatch problem with pollutant emission reduction. The proposed method was
tested on IEEE 14 bus test system and on two thermal plant units. In one of the test cases the
transmission loss and the pollutant emission were neglected; finally, the results were compared with PSO to demonstrate the efficiency and the robustness of the firefly algorithm with less
average CPU time.
T. Bouktir [8] presented a simple genetic algorithm solution to the optimal power flow problem
of large distribution systems. The aim was to minimize the fuel cost and keep the power outputs
of generators, bus voltages, shunt capacitors/reactors and transformers tap-setting in their secure
limits. CPU time was reduced by decomposing the optimization constraints into active constraints manipulated directly by the genetic algorithm, and constraints maintained within their soft
limits using a conventional constraint load flow. The IEEE 30-bus system has been studied to show
the effectiveness of the algorithm.
Raji (Senior Member, IEEE) [9] gave an idea to use voltage angles at generator buses as a control
variable to get load bus voltages with less calculation. This approach in solving the optimal power
flow problem by genetic algorithms may be ineffective if starting values of voltage angles are
chosen randomly. To solve these difficulties, a procedure for selection of an initial set of complex
voltages at generator- buses was proposed in this paper. With this procedure, one can start the
optimization process (i.e., genetic algorithm) with a set of control variables, causing few or no
violation of constraints. The outcome was competitive with other methods and resulted in drastic
reduction of computational time.
Md Laouer [10] performed the calculation of optimal power flow for active and reactive power
using genetic algorithm which simplifies difficulties of convergence of the given problem. In this
case, the mathematical aspect is based on the analogy of the physical and biological processes and
it reduces the complexity, which is usually encountered, for the resolution of the optimisation
problems. This approach is more effective and powerful as the combination of the decoupled
method and GA was used for calculation of the optimal power distribution.
1.5 Objective:
1) Mathematical representation of a 14 and 30 bus system.
2) Calculation of the minimum value of Fuel Cost for 14 and 30 bus systems.
3) Calculation of the minimum value of Emission for 14 and 30 bus systems.
4) Obtaining a trade-off between Fuel cost and emission for 14 and 30 bus systems using
Pareto Optimal Curve and finding the compromising point using Genetic Algorithm.
1.6 INTRODUCTION
Today, as the demand for power is increasing day by day, economic load dispatch plays an important role in scheduling power generator outputs with respect to the load demand and in operating a power system economically, so as to minimize its operating cost. To solve
the economic load dispatch problem many techniques were proposed such as classical techniques,
linear programming (LP), nonlinear programming (NLP), Quadratic Programming (QP), swarm
optimization, evolutionary programming, tabu search, genetic algorithm, etc. In this work we have
used genetic algorithm (GA) technique to solve economic load dispatch problem for IEEE 14 &
30 Bus System. Genetic algorithm is a search algorithm based on the mechanics of natural selection
and natural genetics; it combines the survival of the fittest among structures with a structured yet
randomized information exchange to form a search algorithm with some of the innovative flair of
human search.
1.2 Genetic Algorithm
It generates a population of points at each iteration, and the best point in the population approaches an optimal solution. GA works with a coding of the parameter set, not the parameters themselves. GA uses payoff (objective function) information, not derivatives or other auxiliary knowledge, and it uses probabilistic transition rules, not deterministic rules. GA was developed by John Holland, his colleagues and his students at the University of Michigan.
1.4.1 GENETIC ALGORITHM:
A genetic algorithm (GA) is a search technique used in computing to find exact or
approximate solutions to optimization and search problems. Genetic algorithms are
categorized as global search heuristics. Genetic algorithms are a particular class of
evolutionary algorithms that use techniques inspired by evolutionary biology such as
inheritance, mutation, selection, and crossover (also called recombination). Zalzala et al. [8A] reviewed the current development of techniques in genetic algorithms, covering both theoretical aspects of genetic algorithms and genetic algorithm applications. Theoretical topics under review include genetic algorithm techniques, genetic operator techniques, niching techniques, genetic drift, methods of benchmarking genetic algorithm performance, measurement of the difficulty level of a test-bed function, population genetics and developmental mechanisms in genetic algorithms. According to Zalzala et al. there were two types of genetic algorithm: the earlier one was the breeder genetic algorithm and the other was the simple genetic algorithm. The Breeder Genetic Algorithm (BGA) was first introduced by Mühlenbein and Schlierkamp-Voosen (1993). The major difference between the
simple genetic algorithm and BGA is the method of selection. Generally, truncation
selection is used in BGA. Genetic drift is an important phenomenon in genetic algorithm
search. Once the algorithm is converged, the size of original gene pool is reduced to the
size of the found solution(s) gene pool; this leads to genetic drift. Two niching techniques, the simple sub-population scheme and deterministic crowding, are reviewed. Many traditional optimization algorithms suffer from myopia in highly complex search spaces, leading them to less than desirable performance (both in terms of execution speed and the fraction of time they need to find an optimal solution) [8B]. This paper helps us in understanding the application of genetic algorithms to multiple fault diagnosis (MFD) problems. It is seen that in an MFD problem, in many regions of the search space there is little information to direct the search (e.g., in a flat valley). Consequently, local search
algorithms may exhibit less than desirable performance. To handle irregular search spaces, such
heuristics should adopt a global strategy and rely heavily on intelligent randomization. Genetic
algorithms follow just such a strategy. Following the model of evolution, they establish a
population of individuals, where each individual corresponds to a point in the search space. An
objective function is applied to each individual to rate its fitness. Using well-conceived operators, a next generation is formed based upon the survival of the fittest. Therefore, the evolution of individuals from generation to generation tends to result in fitter individuals, i.e. better solutions, in the search space.
A genetic algorithm approach to economic load dispatch has also been proposed in the literature in which new genetic operations of crossover and mutation are introduced. On realizing the crossover operation, the offspring spread over the domain so that a
higher chance of reaching the global optimum can be obtained. On realizing the mutation, each
gene will have a chance to change its value. Consequently, the search domain will become smaller
when the training iteration number increases in order to realize a fine-tuning process. Operating at
absolute minimum cost can no longer be the only criterion for dispatching electric power due to
increasing concern over the environmental considerations. The economic-emission load dispatch
(EELD) problem which accounts for minimization of both cost and emission is a multiple,
conflicting objective function problem. Goal programming techniques are most suitable for such
type of problems. In the paper Economic-Emission Load Dispatch through Goal Programming
Techniques, Nanda et al. [8] solved the economic-emission load dispatch problem through linear
and non-linear goal programming algorithms. The application and validity of the proposed
algorithms are demonstrated for a sample system having six generators. In reality, the EELD
problem is a multiple objective problem with conflicting objectives because minimum pollution is
conflicting to minimum cost of generation. In the paper the EELD problem is solved for the first
time, to the best of the authors' knowledge, using a linear goal programming (LGP) technique. A
maiden attempt is made to solve this conflicting multiobjective problem with the use of the LGP technique as well as the NLGP (non-linear goal programming) technique. In our work many solutions have been obtained by GA
for IEEE 14 and 30 bus systems. The best compromise solution or the Target Point can be obtained
by Minimum Distance Method, Goal Programming, optimal weights, Surrogate worth Trade-off
Technique, Sequential Goal Programming etc.
Given a problem with h objectives, h-2 of them are set at predetermined values and one of the
remaining two objectives is minimized with the other objective constrained at varying levels (e.g.
if Z1 is to be minimized, Z2 is varied over some range of values, and Z3, Z4, ...Zh, are fixed at levels
L3, L4, ..., Lh). In other words, the original h-objective problem is reduced to a two-objective
problem. Another application where the surrogate worth trade-off technique has been applied is in power system operation in electricity markets, as proposed by A. Berizzi et al. [7]. His work regards the possible use of multiobjective methodologies in order to improve the security and reliability of power systems during the short-term planning and operation stages.
According to that work, the liberalization of electric energy markets has brought important changes in
the economic and technical aspects of power system planning and operation: the grids have to be
managed according to new economic principles but taking into account the old technical
constraints. Therefore, it is necessary to change the perspective during power system optimization,
and this requires an improvement in the methodologies and algorithms used. The steps mentioned by him to solve a multiobjective problem were: a) define the objectives; b) find the Pareto set; and c) choose a solution from the Pareto set.
1.5 Thesis Organization:
The material of this dissertation has been arranged in seven chapters. The contents of the chapters
are briefly outlined as indicated below:
Chapter 1: Discusses the introduction to Genetic Algorithm and Research objectives of the thesis.
Literature survey of the covered topics has also been presented.
Chapter 2: Presents the development of Genetic Algorithms and its applications.
Chapter 3: Explores the concepts of the Genetic Algorithm in MATLAB R2007b. Analysis of various constrained and multiobjective problems has been carried out and the results have been presented.
Chapter 4: Describes the economic load dispatch problem. The economic load dispatch problem neglecting losses has also been solved using GA and the results have been presented.
Chapter 5: Presents the surrogate worth tradeoff technique.
Chapter 6: Presents the application of economic load dispatch problem for IEEE 5, 14 & 30 bus
systems. Results have been presented and Surrogate worth Tradeoff technique has been applied.
Chapter 7: Conclusion and Scope of Further Work.
References at the end of the thesis.
CHAPTER -2
INTRODUCTION TO GENETIC ALGORITHM
6. Provides a list of optimum variables, not just a single solution.
7. Can encode the variables so that the optimization is done with the encoded
variables i.e. it can solve every optimization problem which can be described
with the chromosome encoding.
8. Works with numerically generated data, experimental data, or analytical functions, and therefore works on a wide range of problems. For each optimization problem in GAs, there are a number of possible encodings. These advantages are intriguing and produce stunning results where traditional optimization approaches fail miserably. Due to the various advantages discussed above, GAs are used in a number of different application areas.
In power systems, GAs have been used in the following areas:
Loss reduction using active filter controllers
Optimal load dispatch
Voltage stability
Limitations of GA
In spite of its successful implementation, GA possesses some weaknesses, which lead to the following limitations:
1. Certain optimisation problems (they are called variant problems) cannot be
solved by means of genetic algorithms. This occurs due to poorly known fitness
functions which generate bad chromosome blocks in spite of the fact that only
good chromosome blocks cross-over.
2. There is no absolute assurance that a genetic algorithm will find a global optimum.
It happens very often when the populations have a lot of subjects.
3. Genetic algorithm applications in controls which are performed in real time are
limited because of random solutions and convergence, in other words this means
that the entire population is improving, but this could not be said for an individual
within this population. Therefore, it is unreasonable to use genetic algorithms for
on-line controls in real systems without testing them first on a simulation model.
4. One well-known problem that can occur with a GA is known as premature
convergence. If an individual that is more fit than most of its competitors emerges
early on in the course of the run, it may reproduce so abundantly that it drives down
the population's diversity too soon, leading the algorithm to converge on the local
optimum that that individual represents rather than searching the fitness landscape
thoroughly enough to find the global optimum.
5. One type of problem that genetic algorithms have difficulty dealing with is problems with "deceptive" fitness functions, those where the locations of improved
points give misleading information about where the global optimum is likely to be
found.
The process of GA follows this pattern:
1. An initial population of a random solution is created.
2. Each member of the population is assigned a fitness value based on its evaluation
against the current problem.
3. Solutions with the highest fitness values are most likely to parent new solutions during reproduction.
4. The new solution set replaces the old, a generation is completed and the process
continues at step (2). Members of the population (chromosomes) are represented by a string
of genes. Each gene represents a design variable and is symbolized by a binary number.
Then, GA operators are used which are:
reproduction
crossover
mutation
GA allows a population composed of many individuals to evolve under specified selection rules to a state that maximizes the "fitness" (i.e., minimizes the cost function). The different steps of a GA are explained with the help of the simplified flowchart shown below (Fig 2.1). GAs work iteratively, sustaining a set (population) of representative chromosomes of possible solutions to the problem domain at hand. As an optimization method, they evaluate and manipulate these chromosomes using stochastic evolution rules called genetic operators. During each iterative step, known as a generation, the representative chromosomes in the current population are evaluated for their fitness as optimal solutions. By comparing these fitness values, a new population of solution chromosomes is created using the genetic operators known as reproduction, crossover and mutation. Several components are needed to implement a Genetic Algorithm.
Fig 2.1: GA flowchart
2.2 COMPONENTS OF GA
These components are listed below and are detailed in the following sections:
1. Representation
2. Initialization
3. Evaluation Function
4. Genetic Operators
5. Genetic Parameters
6. Termination
2.2.1 REPRESENTATION:
Genetic Algorithms are derived from a study of biological systems. In biological systems
evolution takes place on organic devices used to encode the structure of living beings.
These organic devices are known as chromosomes. A living being is only a decoded
structure of the chromosomes. Natural selection is the link between chromosomes and the
performance of their decoded structures. In GA, the design variables or features that
characterize an individual are represented in an ordered list called a string. Each design
variable corresponds to a gene and the string of genes corresponds to a chromosome.
Encoding:
The application of a genetic algorithm to a problem starts with the encoding. The encoding
specifies a mapping that transforms a possible solution to the problem into a structure
containing a collection of decision variables that are relevant to the problem. A particular
solution to the problem can then be represented by a specific assignment of values to the
decision variables. The set of all possible solutions is called the search space and a
particular solution represents a point in that search space. In practice, these structures can
be represented in various forms, including among others, strings, trees, and graphs. There
are also a variety of possible values that can be assigned to the decision variables, including binary, k-ary, and permutation values.
Therefore, in order to implement GA for finding the solution of a given optimization problem, the variables are first coded in some structure. The strings are coded as binary representations consisting of 0s and 1s. A string in GA corresponds to a "chromosome", and for power dispatch problems a population of random strings of 0s and 1s is first generated. The length of each string in this study has been taken as 16, and the population size as 20.
Decoding:
Decoding is the process of conversion of the binary structure of the chromosomes into
decimal equivalents of the feature values. Usually this process is done after splitting the entire chromosome into its individual sub-strings. The decoded feature values are used
to compute the problem characteristics like the objective function, fitness values, constraint
violation and system statistical characteristics like variance, standard deviation and rate of
convergence. The stages of selection, crossover, mutation etc are repeated till some
termination condition is reached. There are several ways of selecting the termination
conditions, which can be either the convergence of the total objective function or the
satisfaction of the equality constraint or both. Since the genetic algorithm determines the
above features independently, the satisfaction of both the conditions has to be considered
for total absolute convergence. However, in situations of constraint violation, independent
satisfaction of the above conditions has to be considered and in the order of occurrence to
decide the feasibility of the solution.
String representation:
GA works on a population of strings consisting of a generation. A string consists of sub-
strings, each representing a problem variable. In the present ELD problem, the problem
variables correspond to the power generations of the units. Each string represents a possible
solution and is made of sub-strings, each corresponding to a generating unit. The length of
each sub-string is decided based on the maximum/minimum limits on the power generation
of the unit it represents and the solution accuracy desired. The string length, which depends
upon the length of each sub-string, is chosen based on a trade-off between solution
accuracy and solution time. Longer strings may provide better accuracy, but result in higher
solution time.
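As a small illustration of how a 16-bit sub-string is decoded into a generator output within its limits (a sketch only; the limits used here are illustrative, not the thesis data):

% Decode one 16-bit binary sub-string into a generation value (MATLAB sketch).
nBits = 16;                                % sub-string length assumed in this work
Pmin  = 10;   Pmax = 100;                  % illustrative unit limits in MW
bits   = rand(1, nBits) > 0.5;             % a random sub-string of 0s and 1s
intVal = sum(bits .* 2.^(nBits-1:-1:0));   % binary-to-decimal conversion
P = Pmin + (Pmax - Pmin) * intVal / (2^nBits - 1);   % map onto [Pmin, Pmax]
fprintf('Decoded generation: %.3f MW\n', P);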
2.2.2 INITIALIZATION
Initially many individual solutions are randomly generated to form an initial population.
The population size depends on the nature of the problem, but typically contains several
hundreds or thousands of possible solutions. Traditionally, the population is generated
randomly, covering the entire range of possible solutions (the search space). Occasionally,
the solutions may be "seeded" in areas where optimal solutions are likely to be found.
2.2.3 EVALUATION FUNCTION:
The fitness function F(X) is derived from the objective function f(X). For maximization problems the fitness function can be taken to be the same as the objective function,
F(X) = f(X) (3.1)
For minimization problems, the fitness function is an equivalent maximization problem chosen such that the optimum point remains unchanged. The following fitness function is often used in minimization problems:
F(X) = 1 / f(X) (3.2)
This transformation does not alter the location of the minimum, but converts a minimization problem into an equivalent maximization problem. The fitness function value of a string is known as the string's fitness. The operation of GAs begins with a population of random strings representing design or decision variables. Thereafter, each string is evaluated to find its fitness value. The population is then operated on by three operators, reproduction, crossover and mutation, to create a new population of points. The new population is further evaluated and tested for termination. If the termination criterion is not met, the population is iteratively operated on by the above three operators and evaluated. This procedure is continued until the termination criterion is met. One cycle of these operations and the subsequent evaluation procedure is known as a generation in GA terminology. Implementation of the power dispatch problem in GA is realized within the fitness function written in eqn. (3.2).
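A minimal sketch of how a fuel-cost minimization can be wrapped into the fitness form of eqn. (3.2); the quadratic cost coefficients below are placeholders, not the coefficients of the test systems used later:

% Fitness F(X) = 1/f(X) as in eqn. (3.2), with f the total fuel cost.
a = [0.008 0.009];   b = [7 6.3];   c = [200 180];   % hypothetical cost data
fuelCost = @(P) sum(a.*P.^2 + b.*P + c);             % f(X), Rs per hour
fitfcn   = @(P) 1 ./ fuelCost(P);                    % F(X): larger is fitter
F = fitfcn([60 90]);                                 % fitness of one schedule (MW)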
Crossover:
The basic operator for producing new chromosome in the genetic algorithm is crossover.
In the crossover operator, information is exchanged among strings of the mating pool to
create new strings. In other words, crossover produces new individuals that have some parts of both parents' genetic material. It consists of taking two individuals A and B and
randomly selecting a crossover point in each. The two individuals are then split at these
points. The choice of crossover point is not always uniform. It is expected from the
crossover operator that good substrings from the parent strings will be combined to form a
better child offspring. At the molecular level, what occurs is that a pair of chromosomes bump into one another, exchange chunks of genetic information and drift apart. This is the recombination operation, which GA literature generally refers to as crossover because of the way genetic material crosses over from one chromosome to another. The crossover operation happens in an environment where the selection of who gets to mate is a function of the fitness of the individuals, i.e. how good the individual is at competing in its environment. Some Genetic Algorithms use a simple function of the fitness measure to
select individuals (probabilistically) to undergo genetic operations such as crossover or
asexual reproduction (the propagation of genetic material unaltered). This is fitness-
proportionate selection. Other implementations use a model in which certain randomly selected individuals in a subgroup compete and the fittest is selected. This is called tournament selection and is one of the forms of selection we see in nature. The two processes that most contribute to evolution are crossover and fitness-based selection/reproduction. There are three forms of crossover:
(1) one-point crossover
(2) multipoint crossover
(3) uniform crossover.
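A short sketch of a one-point crossover on two binary parent strings (illustrative only):

% One-point crossover: split both parents at a random point and swap tails.
parentA = [1 0 1 1 0 0];
parentB = [0 1 0 0 1 1];
cut = ceil(rand * (numel(parentA) - 1));          % crossover point in 1..L-1
childA = [parentA(1:cut), parentB(cut+1:end)];
childB = [parentB(1:cut), parentA(cut+1:end)];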
Mutation:
Mutation also plays a role in this process, although how important its role is depends upon the conditions. It is also known as a background operator, yet it plays a significant role in the evolutionary process. It cannot be stressed too strongly that the genetic algorithm is not a random search for a highly fit solution to a problem. Mutation consists of randomly selecting a mutation point. The genetic algorithm uses stochastic processes, but the result is distinctly non-random. Genetic algorithms are used in a number of different application areas; an example would be multidimensional optimization problems in which the character string of the chromosome can be used to encode the values of the different parameters being optimized. Mutation is an important operator: newly created individuals otherwise carry no new inheritance information, and the population would contract towards a single point, which is not desired. The mutation operator changes a 1 to a 0 (and vice versa) at only one place in the whole string, with a small probability. For example, take child 1: 101100. If mutation is done at location 5, the new child will be 101110.
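The same mutation step written out as a sketch (reproducing the example above):

% Bit-flip mutation: child 101100 mutated at position 5 becomes 101110.
child = [1 0 1 1 0 0];
pos = 5;                        % mutation point chosen to match the example
child(pos) = 1 - child(pos);    % flip 0 -> 1 (or 1 -> 0)
% child is now [1 0 1 1 1 0], i.e. 101110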
2.2.5 GENETIC PARAMETERS:
Genetic parameters are a means of manipulating the performance of a Genetic Algorithm.
There are many possible implementations of Genetic Algorithms involving variations such
as additional genetic operators, variable sized populations and so forth. Listed below are
some of the basic genetic parameters:
(i) Population Size (N)
(ii) Crossover rate (C)
(iii) Mutation rate (M)
(i). Population Size (N): Population size affects the efficiency and performance of the
algorithm. Using a small population size may result in a poor performance from the
algorithm. This is due to the process not covering the entire problem space. A larger
population on the other hand, would cover more space and prevent premature convergence
to local minima. At the same time, a large population needs more evaluations per
generation and may slow down the convergence rate.
(ii). Crossover rate (C): The crossover rate is the parameter that affects the rate at which
the process of crossover is applied. In each new population, the number of strings that
undergo the process of crossover can be depicted by a chosen probability. This probability
is known as the crossover rate. A higher crossover rate introduces new strings more quickly
into the population. If the crossover rate is too high, high performance strings are
eliminated faster than selection can produce improvements. A low crossover rate may cause stagnation due to the lower exploration rate, and convergence problems may occur.
(iii). Mutation rate (M): Mutation rate is the probability with which each bit position of
each chromosome in the new population undergoes a random change after the selection
process. It is basically a secondary search operator which increases the diversity of the
population. A low mutation rate helps to prevent any bit position from getting trapped at a
single value, whereas a high mutation rate can result in essentially random search.
2.2.6 TERMINATION:
This generational process is repeated until a termination condition has been reached.
Common terminating conditions are:
1. A solution is found that satisfies minimum criteria
2. Fixed number of generations reached
3. Allocated budget (computation time/money) reached
4. The highest ranking solution's fitness is reaching or has reached a plateau such that
successive iterations no longer produce better results
5. Manual inspection
6. Combinations of the above.
The termination condition determines the convergence of the optimization process to the optimal solution. The convergence criterion is given in equation (3.5); if it is not achieved, the whole process repeats.
Fitnessmax – Fitnessmin ≤ 0.0001 (3.5)
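A schematic sketch of how the check of eqn. (3.5) can sit inside the generational loop, with a generation cap as a safeguard (the "GA step" here is only a stand-in that shrinks the population spread):

tol = 1e-4;   maxGen = 200;
fitness = rand(20, 1);                                        % placeholder population fitness
for gen = 1:maxGen
    fitness = mean(fitness) + 0.8*(fitness - mean(fitness));  % stand-in for one GA generation
    if (max(fitness) - min(fitness)) <= tol                   % convergence test, eqn. (3.5)
        break;
    end
end
fprintf('Converged after %d generations\n', gen);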
2.3 Some Applications of Genetic Algorithms:
(i) Pattern Recognition Applications
(ii) Robotics and Artificial life applications
(iii) Expert system applications
(iv) Electronic and Electrical applications
(v) Cellular Automata Applications
(vi) Applications in Biology and Medicine
CHAPTER – 3
EXPLORING THE TOOLS OF GENETIC
ALGORITHM IN MATLAB
Additional Output Arguments
To get more information about the performance of the genetic algorithm, we can call ga with the syntax
[x fval reason output population scores] = ga(@fitnessfcn, nvars)
Besides the optimal point and the objective function value, it can also return the following:
Reason – the reason the algorithm terminated.
Output – run statistics such as the total number of generations GA took to reach the optimal point.
Scores – the fitness function values for the final population.
Population – the population of the final generation.
This is demonstrated for the two variable function
[ x fval reason output population ] = ga(@twofunc ,2)
Optimization terminated: average change in the fitness value less
than options.TolFun.
x =3.0019 1.9974
fval =1.4564e-004
reason = 1
output =randstate: [625x1 uint32]
randnstate: [2x1 double]
generations: 64
funccount: 1300
message: 'Optimization terminated: average change in the fitness
value less than options.TolFun.'
problemtype: 'unconstrained'
population =
3.0019 1.9974
3.0019 1.9974
3.0972 1.4451
3.0019 1.9974
3.0019 1.9974
3.0019 1.9974
3.0972 2.2776
3.0019 1.9974
3.0019 1.5743
2.4790 1.7785
3.0019 1.9974
3.0019 1.9974
3.0019 1.9974
1.8300 2.6615
3.0019 1.9974
2.6747 1.6538
2.9707 1.8944
2.8774 2.3525
3.2334 3.0486
2.5524 2.0540
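The M-file for twofunc is not reproduced in this excerpt; a plausible two-variable sketch that is consistent with the minimum near (3, 2) reported above would be (an assumption, not the original thesis function):

function y = twofunc(x)
% Hypothetical two-variable test function with its minimum at (3, 2).
y = (x(1) - 3)^2 + (x(2) - 2)^2;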
The default options structure returned by gaoptimset contains fields such as the following (the first few fields are not reproduced here):
CrossoverFraction: 0.8000
MigrationDirection: 'forward'
MigrationInterval: 20
MigrationFraction: 0.2000
Generations: 100
TimeLimit: Inf
FitnessLimit: -Inf
StallGenLimit: 50
StallTimeLimit: 20
TolFun: 1.0000e-006
TolCon: 1.0000e-006
InitialPopulation: []
InitialScores: []
InitialPenalty: 10
PenaltyFactor: 100
PlotInterval: 1
CreationFcn: @gacreationuniform
FitnessScalingFcn: @fitscalingrank
SelectionFcn: @selectionstochunif
CrossoverFcn: @crossoverscattered
MutationFcn: @mutationgaussian
HybridFcn: []
Display: 'final'
PlotFcns: []
OutputFcns: []
Vectorized: 'off'
PopulationSize
ans = 20
To create an options structure with a field value that is different from the default, for example to set PopulationSize to 100 instead of its default value of 20, enter
options = gaoptimset('PopulationSize', 100)
This creates the options structure with all values set to their defaults except for PopulationSize, which is set to 100. If you now enter
ga(@fitnessfun, nvars, [], [], [], [], [], [], [], options)
GA runs the genetic algorithm with a population size of 100.
message: 'Optimization terminated: average change in the fitness value less than
options.TolFun.'
problemtype: 'unconstrained'
Then reset the random number generator states by entering
rand('state', output.randstate);
randn('state', output.randnstate);
If you now run ga a second time, you get the same results.
[ x fval ] = ga(@twofunc,2)
Optimization terminated: average change in the fitness value less than
options.TolFun.
x= 2.9973 1.9949
fval = 9.7874e-004
Hence, the results obtained are the same as those obtained earlier.
Resuming ga from the Final Population of a Previous Run – Demonstration
[x fval reason output final_pop] = ga(@twofunc, 2)
Optimization terminated: average change in the fitness value less than
options.TolFun.
x =2.9978 2.0096
fval =0.0013
reason =1
output =randstate: [625x1 uint32]
randnstate: [2x1 double]
generations: 68
funccount: 1380
message: 'Optimization terminated: average change in the fitness value less than
options.TolFun.'
problemtype: 'unconstrained'
final_pop =2.9978 2.0096
2.9978 2.0096
2.9978 2.0096
2.9446 2.1310
2.9978 2.0096
2.9978 1.2363
2.9978 2.0096
2.9978 2.0096
2.9978 1.4147
2.9978 2.0096
2.9978 2.0180
2.9978 2.0096
3.5443 2.0096
2.9978 2.0096
2.9446 2.0096
3.3255 1.7993
2.9785 1.8566
2.6437 2.8720
2.9773 1.6690
2.8753 2.0506
Resuming ga from the Final Population of a Previous Run – Demonstration continued
options = gaoptimset('InitialPopulation', final_pop);
[x fval reason output final_pop2] = ga(@twofunc, 2,[],[],[],[],[],[],[],options)
Optimization terminated: average change in the fitness value less than options.TolFun.
x=
2.9978 2.0096
fval =
0.0013
So, the results obtained by seeding GA with the final population of the previous run are the same as those obtained earlier.
Next, the options are changed: the population size from the default 20 to 100; the number of generations from 100 to 200; the stall generation limit from 50 to 100; and the stall time limit from 20 to 100. On changing the above-mentioned parameters the results obtained are:
>> options=gaoptimset('PopulationSize',100)
>> options=gaoptimset(options,'Generations',200)
>> options=gaoptimset(options,'StallGenLimit',100)
>> options=gaoptimset(options,'StallTimeLimit',100) options
=PopulationType: 'doubleVector'
PopInitRange: [2x1 double]
PopulationSize: 100
Based on the changes made, the following results have been obtained, shown in tabulated form:
Options | X(1) | X(2) | Fitness value | No. of generations
With default options | 2.9919 | 2.0185 | 0.0053 | 51
Changing population size from 20 to 100 | 3.0011 | 2.0113 | 0.0025 | 51
Changing maximum number of generations from 100 to 200 | 2.9966 | 1.9979 | 0.1197 | 51
Changing stall generation limit from 50 to 100 | 2.9995 | 2.0035 | 0.034 | 101
Changing stall time limit from 20 to 100 | 2.9965 | 2.0014 | 0.071 | 101
Changing function tolerance from 1.00e-006 to 1.00e-004 | 3.0003 | 2.0004 | 0.0457 | 101
It is found that after changing the above-mentioned parameters the best fitness value of the function has increased from 0.0053 to 0.0449, and the total number of generations taken by GA to obtain the final answer has also changed from 51 to 101. It is seen that on changing the function tolerance there is not much change in the best fitness value; it has changed from 0.0449 to 0.0457.
3.7 CONSTRAINED MINIMIZATION USING GA
The GA function assumes the constraint function will take one input x, where x has as
many elements as the number of variables in the problem.
The constraint function computes the values of all the inequality and equality constraints
and returns two vectors, c and ceq, respectively.
To minimize the fitness function, you need to pass a function handle to the fitness function
as the first argument to the ga function, as well as specifying the number of variables as
the second argument. Lower and upper bounds are provided as LB and UB respectively. In
addition, you also need to pass a function handle to the nonlinear constraint function.
The syntax to implement the constrained minimization is as follows:
[x,fval] = ga(ObjectiveFunction, nvars, [], [], [], [], LB, UB, ConstraintFunction)
Suppose you want to minimize the simple fitness function of two variables x1 and x2,
min f(x) = 100*(x1^2 – x2)^2 + (1 – x1)^2
subject to the following nonlinear inequality constraints and bounds:
x1*x2 + x1 – x2 + 1.5 <= 0
10 – x1*x2 <= 0
0 <= x1 <= 1
0 <= x2 <= 13
First, create an M-file named simple_fitness.m as follows:
function y = simple_fitness(x)
y = 100 * (x(1)^2 - x(2)) ^2 + (1 - x(1))^2;
The genetic algorithm function, ga, assumes the fitness function will take one input x,
where x has as many elements as the number of variables in the problem. The fitness
function computes the value of the function and returns that scalar value in its one return
argument, y.
Create an M-file, simple_constraint.m, containing the constraints:
function [c, ceq] = simple_constraint(x)
c = [1.5 + x(1)*x(2) + x(1) - x(2); -x(1)*x(2) + 10; x(1) - 1; x(2) - 13];
ceq = [];
For the constrained minimization problem, the ga function changed the mutation function
to @mutationadaptfeasible. The default mutation function, @mutationgaussian, is only
appropriate for unconstrained minimization problems.
Specify mutationadaptfeasible as the mutation function for the minimization problem by
using the gaoptimset function.
options = gaoptimset('MutationFcn',@mutationadaptfeasible);
ObjectiveFunction = @simple_fitness; nvars = 2; % Number of variables
LB = [0 0]; % Lower bound
UB = [1 13]; % Upper bound
ConstraintFunction = @simple_constraint;
Next run the ga solver.
[x,fval] = ga(ObjectiveFunction,nvars,[],[],[],[],LB, UB,ConstraintFunction, options)
Optimization terminated: current tolerance on f(x) 1e-007 is less than options.TolFun and
constraint violation is less than options.TolCon.
x= 0.8122 12.3122
fval = 1.3578e+004
3.8 PARAMETRIZING FUNCTIONS CALLED BY GA
Sometimes you might want to write functions that are called by ga that have additional
parameters to the independent variable. For example, suppose you want to minimize the
following function:
f(x) = (a – b*x1^2 + x1^4/3)*x1^2 + x1*x2 + (-c + c*x3^2)*x3^2
for different values of a, b, and c. Because ga accepts a fitness function that depends only
on x, you must provide the additional parameters a, b, and c to the function before calling
ga.
Parameterizing Functions Using Anonymous Functions with ga
To parameterize your function, first write an M-file containing the following code:
function y = parameterfun(x,a,b,c)
y = (a - b*x(1)^2 + x(1)^4/3)*x(1)^2 + x(1)*x(2) + (-c + c*x(3)^2)*x(3)^2;
Save the M-file as parameterfun.m in a directory on the MATLAB path.
Now, suppose you want to minimize the function for the parameter values a = 4, b = 2.1, and c = 4. To do so, define a function handle to an anonymous function by entering the following commands at the MATLAB prompt:
>> a = 4; b = 2.1; c = 4;   % Define parameter values
>> fitfun = @(x) parameterfun(x,a,b,c);
>> NVARS = 3;
>> [x fval] = ga(fitfun, NVARS)
Optimization terminated: average change in the fitness value less than
options.TolFun.
x = -0.1302 0.7170 0.2272
fval = -1.0254
3.9 GA TOOL
gatool is one of the features available in MATLAB. It performs the same functions as ga from the command line, but gatool is not operated from the command prompt. Instead, once gatool is typed at the command prompt, a new window opens in which the options can be adjusted and the optimal point obtained.
Basic Operation of GA tool
Write a simple M-file which computes the objective function value and import it into gatool by entering @filename in the Fitness function field. Specify the number of variables in the Number of variables field.
Then click the Start button; the output appears in the same window.
Constraints and bounds that can be specified in the GA tool
Linear inequalities of the form A*x <= b are specified by the matrix A and the vector b.
Linear equalities of the form Aeq*x = beq are specified by the matrix Aeq and the vector
beq.
Bounds are lower and upper bounds on the variables.
Lower = specifies lower bounds as a vector.
Upper = specifies upper bounds as a vector.
Nonlinear constraint function defines the nonlinear constraints. Specify the function as an
anonymous function or as a function handle of the form @nonlcon, where nonlcon.m is an
M-file that returns the vectors c and ceq. The nonlinear equalities are of the form ceq = 0, and the nonlinear inequalities are of the form c <= 0.
3.11 PLOT FUNCTION IN GA TOOL
Plot functions enable you to plot various aspects of the genetic algorithm as it is executing.
Each one will draw in a separate axis on the display window. Use the Stop button on the
window to interrupt a running process.
Plot interval specifies the number of generations between successive updates of the plot.
Best fitness plots the best function value in each generation versus iteration number.
Score diversity plots a histogram of the scores at each generation.
Stopping plots stopping criteria levels.
Best individual plots the vector entries of the individual with the best fitness function
value in each generation.
Genealogy plots the genealogy of individuals. Lines from one generation to
the next are color-coded as follows:
Red lines indicate mutation children.
Blue lines indicate crossover children.
Black lines indicate elite individuals.
Max constraint plots the maximum nonlinear constraint violation.
Distance plots the average distance between individuals at each generation.
Range plots the minimum, maximum, and mean fitness function values in each generation.
Selection plots a histogram of the parents. This shows you which parents are
contributing to each generation.
Run Solver in GA tool
To run the solver, click Start under Run solver. When the algorithm terminates, the Status
and results pane displays the reason the algorithm terminated. The Final point updates to
show the coordinates of the final point.
Options in GA tool
Populations
Fitness scaling
Selection
Reproduction
Mutation
Crossover
Stopping criteria
Output functions
Display to the command window
Vectorize
3.13 FITNESS SCALING OPTION IN GA TOOL
The scaling function converts raw fitness scores returned by the fitness function to values
in a range that is suitable for the selection function. The Scaling function option specifies the function that performs the scaling. You can choose from the following functions:
Rank scales the raw scores based on the rank of each individual, rather than its score. The
rank of an individual is its position in the sorted scores. The rank of the fittest individual is
1, the next fittest is 2, and so on. Rank fitness scaling removes the effect of the spread of
the raw scores.
The default option in the Mutation function field is Gaussian, which is normally used for unconstrained problems. For constrained problems the Adapt feasible option is used.
Vectorize Option in GAtool
The vectorize option specifies whether the computation of the fitness function is
vectorized.
Set Objective function is vectorized to On to indicate that the fitness
function is vectorized.
When Objective function is vectorized is Off, the algorithm calls the fitness
function on one individual at a time as it loops through the population.
From the corresponding results table we can analyze the best compromise solution at different weights. It is seen that the best solution, or maximum function value, is obtained at the weights (100, 100).
CHAPTER – 4
ECONOMIC LOAD DISPATCH IN POWER SYSTEM
4.1 PURPOSE OF ECONOMIC LOAD DISPATCH:
The purpose of economic dispatch or optimal dispatch is to reduce fuel costs for the power
system. Minimum fuel costs are achieved by the economic load scheduling of the different
generating units or plants in the power system. By economic load scheduling we mean to
find the generation of the different generators or plants so that the total fuel cost is
minimum, and at the same time the total demand and the losses at any instant must be met
by the total generation. In the case of economic load dispatch the generations are not fixed but are allowed to take values within certain limits so as to meet a particular load demand with minimum fuel consumption. This means the economic load dispatch problem is really the solution of a large number of load flow problems, from which the one that is optimum in the sense of needing the minimum cost of generation is chosen.
The transmission loss is expressed through the B-coefficient loss formula
PL = Σi Σj Pi Bij Pj
where Pi, Pj are the real power injections at the ith and jth buses, Bij are the loss coefficients, which are constant under certain assumed conditions, and NG is the number of generation buses.
The above constrained optimization problem is converted into an unconstrained one. The Lagrange multiplier method is used, in which a function is minimized (or maximized) subject to side conditions in the form of equality constraints. Using the Lagrange multiplier λ, an augmented function is defined as
L = FT + λ (PD + PL − Σi Pi)
where FT is the total fuel cost and PD the total load demand. By differentiating the transmission loss equation with respect to Pi, the incremental transmission loss is obtained as
∂PL/∂Pi = Σj 2 Bij Pj   (i = 1, 2, …, NG) (4.2)
which, combined with the incremental cost of unit n, gives the coordination equation
Fnn Pn + fn + λ Σm 2 Bmn Pm = λ
Collecting all coefficients of Pn, we obtain
Pn (Fnn + 2 λ Bnn) = λ − fn − λ Σ(m≠n) 2 Bmn Pm
Pn = [1 − fn/λ − Σ(m≠n) 2 Bmn Pm] / (Fnn/λ + 2 Bnn)
But for this we first carry out the minimization of the cost function. To solve the economic load dispatch problem, we have formulated the problem in the following manner:
Minimize F(x) = C1 + C2 + C3 + … + Cn
where n is the number of generators and C1, C2, …, Cn are the cost characteristics of the units.
Inequality constraint:
Pmin ≤ Pi ≤ Pmax
Equality constraints:
PD + PL − Σ(n=1..NG) Pn = 0
Σn Σj Pn Bnj Pj − specified loss = 0
where PL = Σn Σj Pn Bnj Pj (the loss formula using B-coefficients), PD represents the total load demand, PL represents the losses and Bnj represents the B-coefficients.
Results are obtained for different values of the specified or fixed loss. On the above results the surrogate worth trade-off technique has been applied to obtain the best compromise solution or target point.
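A minimal sketch of how this formulation can be handed to ga, with the power-balance equality enforced through a quadratic penalty; the cost coefficients, B-coefficients, limits and demand below are placeholders rather than the thesis test-system data:

% ELD fitness for ga: total fuel cost plus a penalty on PD + PL - sum(P).
a = [0.008 0.009 0.007];   b = [7.0 6.3 6.8];   c = [200 180 140];  % cost data (assumed)
B = 1e-4 * [2 1 0.5; 1 3 1; 0.5 1 4];                               % loss coefficients (assumed)
PD = 450;                                                           % MW demand (assumed)
Pmin = [50 50 50];   Pmax = [250 250 250];                          % unit limits (assumed)
cost    = @(P) sum(a.*P.^2 + b.*P + c);
loss    = @(P) P * B * P.';                    % PL from the B-coefficient formula
balance = @(P) PD + loss(P) - sum(P);          % equality-constraint mismatch
fitfun  = @(P) cost(P) + 1e4 * balance(P).^2;  % penalised objective for ga
opts = gaoptimset('PopulationSize', 50, 'Generations', 200);
[Pbest, Fbest] = ga(fitfun, 3, [], [], [], [], Pmin, Pmax, [], opts);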
Neglecting transmission losses, the objective is to minimize the total fuel input
FT = Σ(i=1..N) Fi
Subject to   PD = Σ(i=1..N) Pi   (4.4)
where FT is the total fuel input to the system, Fi the fuel input to the ith unit, PD the total load demand and Pi the generation of the ith unit.
By making use of the Lagrange multiplier, the auxiliary function is obtained as
F = FT + λ (PD − Σ(i=1..N) Pi)   (4.5)
where λ is the Lagrange multiplier.
Differentiating F with respect to the generation Pi and equating to zero gives the condition for optimal operation of the system:
∂F/∂Pi = ∂FT/∂Pi + λ (0 − 1) = 0,  i.e.  ∂FT/∂Pi = dFi/dPi = λ   (4.6)
Here dFi/dPi is the incremental fuel cost of plant i in Rs per MWh.
The incremental production cost of a given plant over a limited range is represented by
dFn/dPn = Fnn Pn + fn   (4.7)
where Fnn is the slope and fn the intercept of the incremental production cost curve of plant n. Equation (4.6) means that the machines should be so loaded that the incremental cost of production of each machine is the same. It is to be noted here that the active power generation constraints are taken into account while solving the equations derived above. If these constraints are violated for any generator, that generator is tied to the corresponding limit and the rest of the load is distributed among the remaining generator units according to the equal incremental cost of production. The simultaneous solution of equations (4.3) and (4.5) gives the economic operating schedule.
I: Let us consider a problem.
There are two generators of 100 MW each with incremental characteristics:
dF1/dP1 = 2 + 0.012 P1
The minimum load on each unit is 10 MW and the total load to be supplied is 150 MW.
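The incremental characteristic of the second unit is not reproduced in this excerpt, so the sketch below assumes a hypothetical dF2/dP2 = 1.5 + 0.015 P2 purely to complete the equal-incremental-cost (λ) illustration with the stated limits and demand:

% Equal incremental cost solution for the two-unit example (losses neglected).
a = [0.012 0.015];   b = [2 1.5];     % slopes and intercepts; unit 2 data is assumed
Pmin = [10 10];   Pmax = [100 100];   PD = 150;
lam = (PD + sum(b./a)) / sum(1./a);   % lambda from sum((lam - b)./a) = PD
P   = (lam - b) ./ a;                 % dispatch at equal incremental cost
P   = min(max(P, Pmin), Pmax);        % clamp to unit limits (not binding here)
fprintf('lambda = %.4f, P1 = %.2f MW, P2 = %.2f MW\n', lam, P(1), P(2));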
CHAPTER – 5
Introduction
The Pareto frontier is the set of all Pareto efficient allocations, conventionally shown graphically.
It also is variously known as the Pareto front or Pareto set.
A Pareto improvement is a change to a different allocation that makes at least one individual or
preference criterion better off without making any other individual or preference criterion worse
off, given a certain initial allocation of goods among a set of individuals. An allocation is defined
as "Pareto efficient" or "Pareto optimal" when no further Pareto improvements can be made, in
which case we are assumed to have reached Pareto optimality.
"Pareto efficiency" is considered as a minimal notion of efficiency that does not necessarily result
in a socially desirable distribution of resources: it makes no statement about equality, or the overall
well-being of a society.
The notion of Pareto efficiency has been applied to the selection of alternatives in engineering and
similar fields. Each option is first assessed, under multiple criteria, and then a subset of options is
ostensibly identified with the property that no other option can categorically outperform any of its
members.
"Pareto optimality" is a formally defined concept used to determine when an allocation is optimal.
An allocation is not Pareto optimal if there is an alternative allocation where improvements can be
made to at least one participant's well-being without reducing any other participant's well-being.
If there is a transfer that satisfies this condition, the reallocation is called a "Pareto improvement."
When no further Pareto improvements are possible, the allocation is a "Pareto optimum."
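In this work the two criteria of interest are fuel cost and emission; a small sketch of how the non-dominated (Pareto-optimal) schedules can be picked out of a set of candidate (cost, emission) pairs, using illustrative numbers only:

% Keep the non-dominated points when both objectives are to be minimised.
F = [620 260; 605 300; 650 240; 615 255; 700 235];   % candidate (cost, emission) pairs
n = size(F, 1);
isPareto = true(n, 1);
for i = 1:n
    for j = 1:n
        % j dominates i if it is no worse in both objectives and strictly better in one
        if j ~= i && all(F(j,:) <= F(i,:)) && any(F(j,:) < F(i,:))
            isPareto(i) = false;
            break;
        end
    end
end
paretoFront = F(isPareto, :);                         % the Pareto-optimal candidates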
The formal presentation of the concept in an economy is as follows. Consider an economy with I agents and J goods. Then an allocation {x1, …, xI}, where xn ∈ R^J for each n, is Pareto optimal if there is no other feasible allocation {x'1, …, x'I} such that, for the utility function un of each agent n, un(x'n) ≥ un(xn) for all n ∈ {1, …, I}, with un(x'n) > un(xn) for some n. Here, in this simple economy, "feasibility"
refers to an allocation where the total amount of each good that is allocated sums to no more than
the total amount of the good in the economy. In a more complex economy with production, an
allocation would consist both of consumption vectors and production vectors, and feasibility would
require that the total amount of each consumed good is no greater than the initial endowment plus
the amount produced.
In principle, a change from a generally inefficient economic allocation to an efficient one is not
necessarily considered to be a Pareto improvement. Even when there are overall gains in the
economy, if a single agent is disadvantaged by the reallocation, the allocation is not Pareto optimal.
For instance, if a change in economic policy eliminates a monopoly and that market subsequently
becomes competitive, the gain to others may be large. However, since the monopolist is
disadvantaged, this is not a Pareto improvement. In theory, if the gains to the economy are larger
than the loss to the monopolist, the monopolist could be compensated for its loss while still leaving
a net gain for others in the economy, allowing for a Pareto improvement. Thus, in practice, to
ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency,
compensation of one or more parties may be required. It is acknowledged, in the real world, that
such compensations may have unintended consequences leading to incentive distortions over time,
as agents supposedly anticipate such compensations and change their actions accordingly.
Under the idealized conditions of the first welfare theorem, a system of free markets, also called a
"competitive equilibrium," leads to a Pareto-efficient outcome. It was first demonstrated
mathematically by economists Kenneth Arrow and Gérard Debreu.
However, the result only holds under the restrictive assumptions necessary for the proof: markets
exist for all possible goods, so there are no externalities; all markets are in full equilibrium; markets
are perfectly competitive; transaction costs are negligible; and market participants have perfect
information.
In the absence of perfect information or complete markets, outcomes will generally be Pareto
inefficient, per the Greenwald-Stiglitz theorem.
The second welfare theorem is essentially the reverse of the first welfare theorem. It states that under similar, ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system, although it may also require a lump-sum transfer of wealth.
5.2 WEIGHTED SUM METHOD
In decision theory, the weighted sum model (WSM) is the simplest multi-criteria decision
analysis (MCDA) / multi-criteria decision making method for evaluating a number of alternatives
in terms of a number of decision criteria. It is very important to state here that it is applicable only
when all the data are expressed in exactly the same unit. If this is not the case, then the final result
is equivalent to "adding apples and oranges."
In general, suppose that a given MCDA problem is defined on m alternatives and n decision
criteria. Furthermore, let us assume that all the criteria are benefit criteria, that is, the higher the
values are, the better it is. Next suppose that wj denotes the relative weight of importance of the
criterion Cj and aij is the performance value of alternative Ai when it is evaluated in terms of
criterion Cj. Then, the total (i.e., when all the criteria are considered simultaneously) importance of alternative Ai, denoted as Ai(WSM-score), is defined as
Ai(WSM-score) = Σ (j = 1, …, n) wj · aij,   for i = 1, 2, …, m,
and, since all criteria are benefit criteria, the best alternative is the one with the highest score.
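For illustration, a small sketch of the score calculation with made-up weights and performance values:

% Sketch: WSM scores for m alternatives under n benefit criteria (made-up data)
w = [0.5 0.3 0.2];                  % criteria weights, summing to 1 (assumed)
A = [25 20 15;                      % performance values a_ij of alternative 1
     10 30 20;                      % alternative 2
     30 10 30];                     % alternative 3
scores = A*w';                      % WSM score of each alternative
[best, idx] = max(scores);          % highest score wins for benefit criteria
fprintf('Best alternative: %d (score %.1f)\n', idx, best)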
CHAPTER-6
6.1 IEEE 14 BUS SYSTEM
B-Coefficients are:
B = 0.0208 0.0090 -0.0021 0.0024 0.0006
0.0090 0.0168 -0.0028 0.0035 0.0000
-0.0021 -0.0028 0.0207 -0.0152 -0.0179
0.0024 0.0035 -0.0152 0.0763 -0.0103
0.0006 0.0000 -0.0179 -0.0103 0.0476
B0 = -0.0001 0.0023 -0.0012 0.0027 0.0011
B00 = 3.1826e-004
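The transmission loss corresponding to any dispatch P (in MW) follows from these coefficients through the loss formula used in the appendix, PL = P·(B/basemva)·P' + B0·P' + B00·basemva with basemva = 100. A quick sketch, taking the cost-optimal GA dispatch reported later in this chapter as the test vector:

% Sketch: transmission loss from the 14-bus B-coefficients for a given dispatch
B  = [ 0.0208  0.0090 -0.0021  0.0024  0.0006
       0.0090  0.0168 -0.0028  0.0035  0.0000
      -0.0021 -0.0028  0.0207 -0.0152 -0.0179
       0.0024  0.0035 -0.0152  0.0763 -0.0103
       0.0006  0.0000 -0.0179 -0.0103  0.0476];
B0  = [-0.0001 0.0023 -0.0012 0.0027 0.0011];
B00 = 3.1826e-004;
basemva = 100;
P  = [173.110 47.443 21.002 14.289 11.433];          % MW, cost-optimal GA dispatch
PL = P*(B/basemva)*P' + B0*P' + B00*basemva;         % loss in MW
fprintf('Transmission loss = %.3f MW\n', PL)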
Power flow bus output for the IEEE 14-bus system (continued):

Bus  Voltage  Angle     ------Load------   ---Generation---  Injected
No   Mag.     Degree    MW       Mvar      MW       Mvar     Mvar
12   1.004    -12.011   6.100    1.600     0.000    0.000    0.000
13   0.998    -12.101   13.500   5.800     0.000    0.000    0.000
14   0.980    -13.123   14.900   5.000     0.000    0.000    0.000
P1 P2 P3 P4 P5
173.110 47.4434 21.0021 14.2895 11.4834
P1 P2 P3 P4 P5
53.199 71.558 71.745 28.48 36.68
FIG 6.2: Comparison between the fuel cost and emission cost of the IEEE 14-bus system
Table: Results of solution of the IEEE 14-bus system for cost optimization

Method                          P(1)     P(2)    P(3)    P(4)     P(5)    Fuel cost  Emission cost
Traditional Lagrangian method   173.317  47.767  21.039  13.5341  11.680  715.5      418.39
Genetic Algorithm               173.110  47.443  21.002  14.289   11.433  715.2      406.77
Method               P(1)     P(2)    P(3)    P(4)    P(5)    Fuel cost  Emission cost
Genetic Algorithm    100.24   58.30   20.86   50.64   34.94   769.12     343.17
Compromise           114.28   50.25   39.63   48.66   12.00   765.06     345.43
Generator data for the IEEE 14-bus system (limits, ramp rate, emission coefficients γ, β, α and fuel-cost coefficients a, b, c):

Gen No  Max (MW)  Min (MW)  Ramp level (MW/hr)  γ       β       α       a        b     c
1       250       10        70                  0.0126  -0.90   22.983  0.00375  2.0   0
2       140       20        28                  0.0200  -0.10   25.313  0.01750  1.75  0
3       100       15        20                  0.0270  -0.01   25.505  0.06250  1.0   0
4       120       10        44                  0.0291  -0.005  24.900  0.00834  3.25  0
5        45       10         9                  0.0290  -0.004  24.700  0.02500  3.0   0
6.2 IEEE 30 BUS SYSTEM
Bus Bus Voltage Angle Load Generator Injected
No code Mag. Degree MW Mvar MW Mvar Qmin Qmax Mvar
1 2 1.060 0.0 0.0 0.0 260.2 -16.1 0 0 0
2 1 1.043 0.0 21.70 12.7 40.0 50.0 -40 50 0
3 0 1.021 0.0 2.4 1.2 0.0 0.0 0 0 0
4 0 1.012 0.0 7.6 1.6 0.0 0.0 0 0 0
5 1 1.010 0.0 94.2 19.0 0.0 37.0 -40 40 0
6 0 1.010 0.0 0.0 0.0 0.0 0.0 0 0 0
7 0 1.002 0.0 22.8 10.9 0.0 0.0 0 0 0
8 1 1.010 0.0 30.0 30.0 0.0 37.3 -10 40 0
9 0 1.051 0.0 0.0 0.0 0.0 0.0 0 0 0
10 0 1.045 0.0 5.8 2.0 0.0 0.0 0 0 0.19
11 1 1.082 0.0 0.0 0.0 0.0 16.2 -6 24 0
12 0 1.057 0 11.2 7.5 0 0 0 0 0
13 1 1.071 0 0 0.0 0 10.6 -6 24 0
14 0 1.042 0 6.2 1.6 0 0 0 0 0
15 0 1.038 0 8.2 2.5 0 0 0 0 0
16 0 1.045 0 3.5 1.8 0 0 0 0 0
17 0 1.040 0 9.0 5.8 0 0 0 0 0
18 0 1.028 0 3.2 0.9 0 0 0 0 0
19 0 1.026 0 9.5 3.4 0 0 0 0 0
20 0 1.030 0 2.2 0.7 0 0 0 0 0
21 0 1.033 0 17.5 1.2 0 0 0 0 0
22 0 1.033 0 0 0.0 0 0 0 0 0
23 0 1.027 0 3.2 1.6 0 0 0 0 0
24 0 1.021 0 8.7 6.7 0 0 0 0 0.043
25 0 1.017 0 0 0.0 0 0 0 0 0
26 0 1.000 0 3.5 2.3 0 0 0 0 0
27 0 1.023 0 0 0.0 0 0 0 0 0
28 0 1.007 0 0 0.0 0 0 0 0 0
29 0 1.003 0 2.4 0.9 0 0 0 0 0
30 0 0.992 0 10.6 1.9 0 0 0 0 0
Line data for the IEEE 30-bus system (continued): from bus, to bus, R (pu), X (pu), ½B (pu), tap setting
10 22 .0727 .1499 0 0
21 22 .0116 .0236 0 0
15 23 .1000 .2020 0 0
22 24 .1150 .1790 0 0
23 24 .1320 .2700 0 0
24 25 .1885 .3292 0 0
25 26 .2544 .3800 0 0
25 27 .1093 .2087 0 0
28 27 0 .3960 0 0.968
27 29 .2198 .4153 0 0
27 30 .3202 .6027 0 0
29 30 .2399 .4533 0 0
8 28 .0636 .2000 0.0428 0
6 28 .0169 .0599 0.0130 0
B00 = 0.0025
Table Results of ELD on IEEE 30 bus applying genetic algorithm
P1 P2 P3 P4 P5 P6 Cost Emission
Solution of the combined economic emission dispatch problem using the weighted sum approach

In this method the multi-objective optimization problem is converted into a single-objective one by assigning a weight w to the fuel cost and (1 - w) to the emission. The single objective function v thus obtained is minimized. The weight is varied from 0 to 1 and the corresponding values of the two objective functions are plotted to form the Pareto-optimal curve:
v = w*FC + (1-w)*E
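A sketch of how such a sweep might be scripted is given below, assuming the 14-bus fuel-cost and emission polynomials and the constraintoptima14a constraint function listed in the appendix are on the MATLAB path; the 0.05 step for w is an arbitrary choice, not the one used for the reported results.

% Sketch of the weight sweep for the 14-bus case
fc = @(p) 0.00375*p(1)^2+2.0*p(1) + 0.0175*p(2)^2+1.75*p(2) + 0.0625*p(3)^2+1.0*p(3) ...
        + 0.00834*p(4)^2+3.25*p(4) + 0.025*p(5)^2+3.0*p(5);                  % fuel cost
em = @(p) 0.0126*p(1)^2-0.9*p(1)+22.983 + 0.02*p(2)^2-0.1*p(2)+25.313 ...
        + 0.027*p(3)^2-0.01*p(3)+25.505 + 0.0291*p(4)^2-0.005*p(4)+24.9 ...
        + 0.029*p(5)^2-0.004*p(5)+24.7;                                      % emission
lb = [10 20 15 10 10];   ub = [250 140 100 120 45];                          % MW limits
ws = 0:0.05:1;  FC = zeros(size(ws));  E = zeros(size(ws));
for k = 1:numel(ws)
    v = @(p) ws(k)*fc(p) + (1-ws(k))*em(p);      % weighted single objective
    p = ga(v, 5, [], [], [], [], lb, ub, @constraintoptima14a);
    FC(k) = fc(p);  E(k) = em(p);
end
plot(FC, E, 'o-'), xlabel('Fuel cost'), ylabel('Emission'), title('Pareto-optimal curve')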
0.65 128.45 66.249 24.556 17.043 29.181 25.413 388.4529 795.6856
0.7 128.179 48.94 33.474 20.168 29.88 29.564 374.3148 798.2758
0.75 125.203 55.152 31.359 15.706 25.553 27.485 378.8295 796.1641
0.8 127.956 34.149 40.17 30.885 22.28 35.17 378.1489 821.8075
0.85 129.516 66.887 27.035 23.703 17.855 26.238 389.6632 803.1698
0.90 128.915 45.029 28.822 20.505 27.08 39.844 376.4575 789.9347
0.95 130.941 66.249 25.005 19.085 21.67 28.182 390.0440 297.3630
1.0 128.763 73.412 25.524 15.996 23.495 24.053 398.2670 804.5729
TABLE-7: Comparison of different methods

Method applied          P1        P2       P3       P4       P5       P6       Fuel cost  Emission
Cost using Lagrangian   150.4562  42.8299  18.5212  10.0000  30.0000  40.0000  780.26     411.979
Cost using GA           153.09    32.86    19.50    28.99    27.37    29.50    787.975    403.949
Emission using GA       152.64    51.30    19.58    28.99    27.37    29.50    845.443    431.960
Compromise solution     128.45    66.24    24.55    17.04    29.18    25.41    795.685    388.452
Generator data for the IEEE 30-bus system (limits, ramp rate, emission coefficients γ, β, α and fuel-cost coefficients a, b, c):

Gen No  Max (MW)  Min (MW)  Ramp level (MW/hr)  γ       β        α       a        b     c
1       200       50        50                  0.0126  -0.90    22.983  0.00375  2.0   0
2        80       20        20                  0.0200  -0.10    25.313  0.01750  1.7   0
3        50       15        13                  0.0270  -0.01    25.505  0.06250  1.0   0
4        35       10         9                  0.0291  -0.005   24.900  0.00834  3.25  0
5        30       10         8                  0.0290  -0.004   24.700  0.02500  3.0   0
6        40       12        10                  0.0271  -0.0055  25.300  0.02500  3.0   0
CHAPTER-7
CONCLUSIONS & FUTURE SCOPE OF WORK
7.1 CONCLUSIONS
Based on the work carried out in this thesis, the following conclusions can be made:
1. The Genetic Algorithm has been studied and its parameters, such as population size, initial population, initial range and stopping conditions, have been analysed with respect to obtaining the optimal points; the final generation was used for plotting the graphs. It was also noticed that the complete population could not be obtained after each generation or iteration; only the best fitness value was available after every generation.
2. Minimization of both constrained and unconstrained functions has been carried out using the Genetic Algorithm to find the global optimum point. Minimization of multi-objective functions using GA has also been performed, for both the constrained and unconstrained cases, using the weighted sum technique.
3. The knowledge gathered above has been used in the formulation and implementation of solution methods to obtain the optimum solution of the Economic Load Dispatch problem using the Genetic Algorithm.
The effectiveness of the developed program has been tested on the IEEE 5, 14 and 30 bus systems. The Surrogate Worth Trade-off technique has been applied to the obtained results to obtain the optimum or ideal point.
APPENDIX
MATLAB programs
For B-coefficients and load loss of the 14-bus system:
clear
basemva=100;
accuracy=0.0001;
maxiter=10;
busdata=[1 1 1.06 0 0 0 0 0 0 0 0
2 2 1.045 0 21.7 12.7 63.11 0 -40 50 0
3 2 1.01 0 94.2 19 23.4 0 0 40 0
4 0 1 0 47.8 -3.9 0 0 0 0 0
5 0 1 0 7.6 1.6 0 0 0 0 0
6 2 1.07 0 11.2 7.5 12.2 0 -6 24 0
7 0 1 0 0 0 0 0 0 0 0
8 2 1.09 0 0 0 17.4 0 -6 24 0
9 0 1 0 29.5 16.6 0 0 0 0 0
10 0 1 0 9 5.8 0 0 0 0 0
11 0 1 0 3.5 1.8 0 0 0 0 0
12 0 1 0 6.1 1.6 0 0 0 0 0
13 0 1 0 13.5 5.8 0 0 0 0 0
14 0 1 0 14.9 5 0 0 0 0 0];
linedata=[1 2 0.01938 0.05917 0.0264 1
1 5 0.05403 0.22304 0.0246 1
2 3 0.04699 0.19797 0.0219 1
2 4 0.05811 0.17632 0.0187 1
2 5 0.0595 0.17388 0.0170 1
3 4 0.06701 0.17103 0.0173 1
4 5 0.01335 0.04211 0.0064 1
4 7 0 0.20912 0 1
4 9 0 0.55618 0 1
5 6 0 0.25202 0 1
6 11 0.09498 0.19890 0 1
6 12 0.12291 0.25581 0 1
6 13 0.06615 0.13027 0 1
7 8 0 0.17615 0 1
7 9 0 0.11001 0 1
9 10 0.03181 0.08450 0 1
9 14 0.12711 0.27038 0 1
10 11 0.08205 0.19207 0 1
12 13 0.22092 0.19988 0 1
13 14 0.17093 0.34802 0 1];
cost=[0 2.0 0.00375
0 1.75 0.0175
0 1.0 0.0625
0 3.25 0.00834
0 3.0 0.025];
mwlimits=[10 250
20 140
15 100
10 120
10 45];
lfybus
lfnewton
busout
bloss
gencost
dispatch
while dpslack>.001 %Repeat till dpslack is within tolerance
lfnewton % New power flow solution
bloss % Loss coefficients are updated
dispatch % Optimum dispatch of gen. with new B-coefficients
end
busout % Prints the final power flow solution
gencost % Generation cost with optimum scheduling of gen.
For 14 Bus Load Dispatch Problem:
Objective Function M-file:
function z=optima14(p)
z=(0.00375*p(1)*p(1)+2.0*p(1)+0.0175*p(2)*p(2)+1.75*p(2)+0.0625*p(3)*p(3)+p(3)+ ...
0.00834*p(4)*p(4)+3.25*p(4)+0.025*p(5)*p(5)+3*p(5));
Constraint Function M-File:
function [c,ceq]=constraintoptima14a(p)
c=[];
ceq=p(1)+p(2)+p(3)+p(4)+p(5)-259-((0.0208*p(1)*p(1)+2*0.009*p(1)*p(2)- ...
2*0.0021*p(1)*p(3)+2*0.0024*p(1)*p(4)+2*0.0006*p(1)*p(5)+0.0168*p(2)*p(2)- ...
2*0.0028*p(2)*p(3)+2*0.0035*p(2)*p(4)+0.0207*p(3)*p(3)-2*0.0152*p(3)*p(4)- ...
2*0.0179*p(3)*p(5)+0.0763*p(4)*p(4)-2*0.0103*p(4)*p(5)+0.0476*p(5)*p(5))/100- ...
0.0001*p(1)+0.0023*p(2)-0.0012*p(3)+0.0027*p(4)+0.0011*p(5)+3.1826e-4);
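These two M-files can be passed straight to the ga solver of MATLAB's Global Optimization Toolbox. The call below is a minimal sketch of how the cost-only 14-bus dispatch might be run from the command line; it assumes the M-files above are on the MATLAB path, takes the bounds from mwlimits, and sets no GA options.

% Sketch: cost-only ELD of the 14-bus case with ga (no options tuned here)
lb = [10 20 15 10 10];             % minimum generation in MW (mwlimits column 1)
ub = [250 140 100 120 45];         % maximum generation in MW (mwlimits column 2)
rng default                        % for a repeatable run (an assumption, not from the text)
[p, fcost] = ga(@optima14, 5, [], [], [], [], lb, ub, @constraintoptima14a);
fprintf('Fuel cost = %.2f, dispatch (MW): %s\n', fcost, mat2str(p, 5));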
Combined Objective Function M-file (weighted sum of fuel cost and emission):
z=(0.00375*p(1)*p(1)+2.0*p(1)+0.0175*p(2)*p(2)+1.75*p(2)+0.0625*p(3)*p(3)+p(3)+ ...
0.00834*p(4)*p(4)+3.25*p(4)+0.025*p(5)*p(5)+3*p(5));      % fuel cost
y=(.0126*p(1)*p(1)-.9*p(1)+22.983+.02*p(2)*p(2)-.1*p(2)+25.313+.027*p(3)*p(3)- ...
.01*p(3)+25.505+.0291*p(4)*p(4)-.005*p(4)+24.9+.029*p(5)*p(5)-.004*p(5)+24.7);  % emission
w=0.70;                  % weight assigned to the fuel cost
v=(w*z)+((1-w)*y);       % combined single objective
Constraint Function M-File: the same constraintoptima14a function listed above is used.

For bloss (computation of the B-loss coefficients and the transmission loss):
for k=1:nbus
if kb(k)== 0
% I(k) = conj(S(k))/conj(V(k));
% else, ngg=ngg+1; I(k)=0; end
else, ngg=ngg+1; end
if kb(k)==1 ks=k; else, end
end
%ID= sum(I);
d1=I/ID;
DD=sum(d1.*Zbus(ks,:)); %new
kg=0; kd=0;
for k=1:nbus
if kb(k)~=0
kg=kg+1;
t1(kg) = Zbus(ks,k)/DD; %new
else, kd=kd+1;
d(kd)=I(k)/ID;
end
end
nd=nbus-ngg;
C1g=zeros(nbus, ngg);
kg=0;
for k=1:nbus
if kb(k)~=0
kg=kg+1;
for m=1:ngg
if kb(m)~=0
C1g(k, kg)=1;
else, end
end
else,end
end
C1gg=eye(ngg,ngg);
C1D=zeros(ngg,1);
C1=[C1g,conj(d1)'];
C2gD=[C1gg; -t1];
CnD=[C1D;-t1(1)];
C2=[C2gD,CnD];
C=C1*C2;
kg=0;
for k=1:nbus
if kb(k)~=0
kg=kg+1;
al(kg)=(1-j*((Qg(k)+Qsh(k))/Pg(k)))/conj(V(k)); %new
else,end
end
alp=[al, -V(ks)/Zbus(ks,ks)];
for k=1:ngg+1
for m=1:ngg+1
if k==m
alph(k,k)=alp(k);
else, alph(k,m)=0;end
end,end
T = alph*conj(C)'*real(Zbus)*conj(C)*conj(alph);
BB=0.5*(T+conj(T));
for k=1:ngg
for m=1:ngg
B(k,m)=BB(k,m);
end
B0(k)=2*BB(ngg+1,k);
end
B00=BB(ngg+1,ngg+1);
B, B0, B00
PL = Pgg*(B/basemva)*Pgg'+B0*Pgg'+B00*basemva;
fprintf('Total system loss = %g MW \n', PL)
clear I BB C C1 C1D C1g C1gg C2 C2gD CnD DD ID T al alp alph t1 d d1 kd kg ks nd ng
for n=1:nbus
fprintf(' %5g', n), fprintf(' %7.3f', Vm(n)),
fprintf(' %8.3f', deltad(n)), fprintf(' %9.3f', Pd(n)),
fprintf(' %9.3f', Qd(n)), fprintf(' %9.3f', Pg(n)),
fprintf(' %9.3f ', Qg(n)), fprintf(' %8.3f\n', Qsh(n))
end
fprintf(' \n'), fprintf(' Total ')
fprintf(' %9.3f', Pdt), fprintf(' %9.3f', Qdt),
fprintf(' %9.3f', Pgt), fprintf(' %9.3f', Qgt), fprintf(' %9.3f\n\n', Qsht)
For dispatch (optimum dispatch of generation using the B-loss coefficients):
if exist('B00')~=1
B00=0;
else, end
if exist('basemva')~=1
basemva=100;
else, end
clear Pgg
Bu=B/basemva; B00u=basemva*B00;
alpha=cost(:,1); beta=cost(:,2); gama = cost(:,3);
Pmin=mwlimits(:,1); Pmax=mwlimits(:,2);
wgt=ones(1, ngg);
if Pdt > sum(Pmax)
Error1 = ['Total demand is greater than the total sum of maximum generation.'
'No feasible solution. Reduce demand or correct generator limits.'];
disp(Error1), return
elseif Pdt < sum(Pmin)
Error2 = ['Total demand is less than the total sum of minimum generation. '
'No feasible solution. Increase demand or correct generator limits.'];
disp(Error2), return
else, end
iterp = 0; % Iteration counter
DelP = 10; % Error in DelP is set to a high value
E=Bu;
if exist('lambda')~=1
lambda=max(beta);
end
while abs(DelP) >= 0.0001 & iterp < 200 % Test for convergence
iterp = iterp + 1; % No. of iterations
for k=1:ngg
if wgt(k) == 1
E(k,k) = gama(k)/lambda + Bu(k,k);
Dx(k) = 1/2*(1 - B0(k)- beta(k)/lambda);
else, E(k,k)=1; Dx(k) = 0;
for m=1:ngg
if m~=k
E(k,m)=0;
else,end
end
end
end
PP=E\Dx';
for k=1:ngg
if wgt(k)==1
Pgg(k) = PP(k);
else,end
end
Pgtt = sum(Pgg);
PL=Pgg*Bu*Pgg'+B0*Pgg'+B00u;
DelP =Pdt+PL -Pgtt ; %Residual
for k = 1:ngg
if Pgg(k) > Pmax(k) & abs(DelP) <=0.001,
Pgg(k) = Pmax(k); wgt(k) = 0;
elseif Pgg(k) < Pmin(k) & abs(DelP) <= 0.001
Pgg(k) = Pmin(k); wgt(k) = 0;
else, end
end
PL=Pgg*Bu*Pgg'+B0*Pgg'+B00u;
DelP =Pdt +PL - sum(Pgg); %Residual
for k=1:ngg
BP = 0;
for m=1:ngg
if m~=k
BP = BP + Bu(k,m)*Pgg(m);
else, end
end
grad(k)=(gama(k)*(1-B0(k))+Bu(k,k)*beta(k)- ...
2*gama(k)*BP)/(2*(gama(k)+lambda*Bu(k,k))^2);
end
sumgrad=wgt*grad';
Delambda = DelP/sumgrad; % Change in variable
lambda = lambda + Delambda; % Successive solution
end
fprintf('Incremental cost of delivered power (system lambda) = %9.6f $/MWh \n', lambda);
fprintf('Optimal Dispatch of Generation:\n\n')
disp(Pgg')
%fprintf('Total system loss = %g MW \n\n', PL)
ng=length(Pgg);
n=0;
if exist('nbus')==1 | exist('busdata')==1
for k=1:nbus
if kb(k)~=0
n=n+1;
if n <= ng
busdata(k,7)=Pgg(n); else, end
else , end
end
if n == ng
for k=1:nbus
if kb(k)==1
dpslack = abs(Pg(k)-busdata(k,7))/basemva;
fprintf('Absolute value of the slack bus real power mismatch, dpslack = %8.4f pu \n', dpslack)
else, end
end
else, end
else, end
clear BP Dx DelP Delambda E PP grad sumgrad wgt Bu B00u B B0 B00
For gencost (computation of the total generation cost):
% This program computes the total generation cost. It requires the
% real power generation schedule and the cost matrix.
%function [totalcost]=gencost(Pgg, cost)
if exist('Pgg')~=1
Pgg=input('Enter the scheduled real power gen. in row matrix ');
else,end
if exist('cost')~=1
cost = input('Enter the cost function matrix ');
else, end
ngg = length(cost(:,1));
Pmt = [ones(1,ngg); Pgg; Pgg.^2];
for i = 1:ngg
costv(i) = cost(i,:)*Pmt(:,i);
end
totalcost=sum(costv);
fprintf('\nTotal generation cost = % 10.2f $/h \n', totalcost)
while exist('accel')~=1
accel = 1.3;
end
while exist('accuracy')~=1
accuracy = 0.001;
end
while exist('basemva')~=1
basemva= 100;
end
while exist('maxiter')~=1
maxiter = 100;
end
iter=0;
maxerror=10;
while maxerror >= accuracy & iter <= maxiter
iter=iter+1;
for n = 1:nbus;
YV = 0+j*0;
for L = 1:nbr;
if nl(L) == n, k=nr(L);
YV = YV + Ybus(n,k)*V(k);
elseif nr(L) == n, k=nl(L);
YV = YV + Ybus(n,k)*V(k);
end
end
Sc = conj(V(n))*(Ybus(n,n)*V(n) + YV) ;
Sc = conj(Sc);
DP(n) = P(n) - real(Sc);
DQ(n) = Q(n) - imag(Sc);
if kb(n) == 1
S(n) =Sc; P(n) = real(Sc); Q(n) = imag(Sc); DP(n) =0; DQ(n)=0;
Vc(n) = V(n);
elseif kb(n) == 2
Q(n) = imag(Sc); S(n) = P(n) + j*Q(n);
if Qmax(n) ~= 0
Qgc = Q(n)*basemva + Qd(n) - Qsh(n);
if abs(DQ(n)) <= .005 & iter >= 10 % After 10 iterations
if DV(n) <= 0.045 % the Mvar of generator buses are
if Qgc < Qmin(n), % tested. If not within limits Vm(n)
Vm(n) = Vm(n) + 0.005; % is changed in steps of 0.005 pu
DV(n) = DV(n)+.005; % up to .05 pu in order to bring
elseif Qgc > Qmax(n), % the generator Mvar within the
Vm(n) = Vm(n) - 0.005; % specified limits.
DV(n)=DV(n)+.005; end
else, end
else,end
else,end
end
if kb(n) ~= 1
Vc(n) = (conj(S(n))/conj(V(n)) - YV )/ Ybus(n,n);
else, end
if kb(n) == 0
V(n) = V(n) + accel*(Vc(n)-V(n));
elseif kb(n) == 2
VcI = imag(Vc(n));
VcR = sqrt(Vm(n)^2 - VcI^2);
Vc(n) = VcR + j*VcI;
V(n) = V(n) + accel*(Vc(n) -V(n));
end
end
maxerror=max( max(abs(real(DP))), max(abs(imag(DQ))) );
if iter == maxiter & maxerror > accuracy
fprintf('\nWARNING: Iterative solution did not converge after ')
fprintf('%g', iter), fprintf(' iterations.\n\n')
fprintf('Press Enter to terminate the iterations and print the results \n')
converge = 0; pause, else, end
end
if converge ~= 1
tech= (' ITERATIVE SOLUTION DID NOT CONVERGE'); else,
tech=(' Power Flow Solution by Gauss-Seidel Method');
end
k=0;
for n = 1:nbus
Vm(n) = abs(V(n)); deltad(n) = angle(V(n))*180/pi;
if kb(n) == 1
S(n)=P(n)+j*Q(n);
Pg(n) = P(n)*basemva + Pd(n);
Qg(n) = Q(n)*basemva + Qd(n) - Qsh(n);
k=k+1;
Pgg(k)=Pg(n);
elseif kb(n) ==2
k=k+1;
Pgg(k)=Pg(n);
S(n)=P(n)+j*Q(n);
Qg(n) = Q(n)*basemva + Qd(n) - Qsh(n);
end
yload(n) = (Pd(n)- j*Qd(n)+j*Qsh(n))/(basemva*Vm(n)^2);
end
Pgt = sum(Pg); Qgt = sum(Qg); Pdt = sum(Pd); Qdt = sum(Qd); Qsht = sum(Qsh);
busdata(:,3)=Vm'; busdata(:,4)=deltad';
clear AcurBus DP DQ DV L Sc Vc VcI VcR YV converge delta
For lfnewton (power flow solution by the Newton-Raphson method):
clear A DC J DX
while maxerror >= accuracy & iter <= maxiter % Test for max. power mismatch
for i=1:m
for k=1:m
A(i,k)=0; %Initializing Jacobian matrix
end, end
iter = iter+1;
for n=1:nbus
nn=n-nss(n);
lm=nbus+n-ngs(n)-nss(n)-ns;
J11=0; J22=0; J33=0; J44=0;
for i=1:nbr
if nl(i) == n | nr(i) == n
if nl(i) == n, l = nr(i); end
if nr(i) == n, l = nl(i); end
J11=J11+ Vm(n)*Vm(l)*Ym(n,l)*sin(t(n,l)- delta(n) + delta(l));
J33=J33+ Vm(n)*Vm(l)*Ym(n,l)*cos(t(n,l)- delta(n) + delta(l));
if kb(n)~=1
J22=J22+ Vm(l)*Ym(n,l)*cos(t(n,l)- delta(n) + delta(l));
J44=J44+ Vm(l)*Ym(n,l)*sin(t(n,l)- delta(n) + delta(l));
else, end
if kb(n) ~= 1 & kb(l) ~=1
lk = nbus+l-ngs(l)-nss(l)-ns;
ll = l -nss(l);
% off diagonal elements of J1
A(nn, ll) =-Vm(n)*Vm(l)*Ym(n,l)*sin(t(n,l)- delta(n) + delta(l));
if kb(l) == 0 % off diagonal elements of J2
A(nn, lk) =Vm(n)*Ym(n,l)*cos(t(n,l)- delta(n) + delta(l));end
if kb(n) == 0 % off diagonal elements of J3
A(lm, ll) =-Vm(n)*Vm(l)*Ym(n,l)*cos(t(n,l)- delta(n)+delta(l)); end
if kb(n) == 0 & kb(l) == 0 % off diagonal elements of J4
A(lm, lk) =-Vm(n)*Ym(n,l)*sin(t(n,l)- delta(n) + delta(l));end
else end
else , end
end
Pk = Vm(n)^2*Ym(n,n)*cos(t(n,n))+J33;
Qk = -Vm(n)^2*Ym(n,n)*sin(t(n,n))-J11;
if kb(n) == 1 P(n)=Pk; Q(n) = Qk; end % Swing bus P
if kb(n) == 2 Q(n)=Qk;
if Qmax(n) ~= 0
Qgc = Q(n)*basemva + Qd(n) - Qsh(n);
if iter <= 7 % Between the 3rd & 7th iterations
if iter > 2 % the Mvar of generator buses are
if Qgc < Qmin(n), % tested. If not within limits Vm(n)
Vm(n) = Vm(n) + 0.01; % is changed in steps of 0.01 pu to
elseif Qgc > Qmax(n), % bring the generator Mvar within
Vm(n) = Vm(n) - 0.01;end % the specified limits.
else, end
else,end
else,end
end
if kb(n) ~= 1
A(nn,nn) = J11; %diagonal elements of J1
DC(nn) = P(n)-Pk;
end
if kb(n) == 0
A(nn,lm) = 2*Vm(n)*Ym(n,n)*cos(t(n,n))+J22; %diagonal elements of J2
A(lm,nn)= J33; %diagonal elements of J3
A(lm,lm) =-2*Vm(n)*Ym(n,n)*sin(t(n,n))-J44; %diagonal of elements of J4
DC(lm) = Q(n)-Qk;
end
end
DX=A\DC';
for n=1:nbus
nn=n-nss(n);
lm=nbus+n-ngs(n)-nss(n)-ns;
if kb(n) ~= 1
delta(n) = delta(n)+DX(nn); end
if kb(n) == 0
Vm(n)=Vm(n)+DX(lm); end
end
maxerror=max(abs(DC));
if iter == maxiter & maxerror > accuracy
fprintf('\nWARNING: Iterative solution did not converge after ')
fprintf('%g', iter), fprintf(' iterations.\n\n')
fprintf('Press Enter to terminate the iterations and print the results \n')
converge = 0; pause, else, end
end
if converge ~= 1
tech= (' ITERATIVE SOLUTION DID NOT CONVERGE'); else,
tech=(' Power Flow Solution by Newton-Raphson Method');
end
V = Vm.*cos(delta)+j*Vm.*sin(delta);
deltad=180/pi*delta;
i=sqrt(-1);
k=0;
for n = 1:nbus
if kb(n) == 1
k=k+1;
S(n)= P(n)+j*Q(n);
Pg(n) = P(n)*basemva + Pd(n);
Qg(n) = Q(n)*basemva + Qd(n) - Qsh(n);
Pgg(k)=Pg(n);
Qgg(k)=Qg(n); %june 97
elseif kb(n) ==2
k=k+1;
S(n)=P(n)+j*Q(n);
Qg(n) = Q(n)*basemva + Qd(n) - Qsh(n);
Pgg(k)=Pg(n);
Qgg(k)=Qg(n); % June 1997
end
yload(n) = (Pd(n)- j*Qd(n)+j*Qsh(n))/(basemva*Vm(n)^2);
end
busdata(:,3)=Vm'; busdata(:,4)=deltad';
Pgt = sum(Pg); Qgt = sum(Qg); Pdt = sum(Pd); Qdt = sum(Qd); Qsht = sum(Qsh);
For lfybus (formation of the bus admittance matrix):
% This program obtains the Bus Admittance Matrix for power flow solution
% Copyright (c) 1998 by H. Saadat
j=sqrt(-1); i = sqrt(-1);
nl = linedata(:,1); nr = linedata(:,2); R = linedata(:,3);
X = linedata(:,4); Bc = j*linedata(:,5); a = linedata(:, 6);
nbr=length(linedata(:,1)); nbus = max(max(nl), max(nr));
Z = R + j*X; y= ones(nbr,1)./Z; %branch admittance
for n = 1:nbr
if a(n) <= 0 a(n) = 1; else end
Ybus=zeros(nbus,nbus); % initialize Ybus to zero
% formation of the off diagonal elements
for k=1:nbr;
Ybus(nl(k),nr(k))=Ybus(nl(k),nr(k))-y(k)/a(k);
Ybus(nr(k),nl(k))=Ybus(nl(k),nr(k));
end
end
% formation of the diagonal elements
for n=1:nbus
for k=1:nbr
if nl(k)==n
Ybus(n,n) = Ybus(n,n)+y(k)/(a(k)^2) + Bc(k);
elseif nr(k)==n
Ybus(n,n) = Ybus(n,n)+y(k) +Bc(k);
else, end
end
end
clear Pgg