Fig. 1. Simulink Model of Power System

Fig. 2. Forward and Backward Propagation in a neural network
The model is a 4-bus 50 Hz system. Two of the buses, bus 1 and bus 4, are generator buses and the other two are load buses. Buses 1 & 2, 2 & 3, and 3 & 4 are connected via 11 kV transmission lines of length 100 km. Static loads of 10, 15, and 5 MW have been added to buses 2, 3, and 4 respectively. Fault simulators have been added to each transmission line for simulating LG, LL, LLG, and LLL type faults. The fault resistance is varied over 8 different values: 0.25, 0.5, 5, 10, 25, 50, 75, and 100 ohms. Generator specifications and transmission line details are given in Tables I and II respectively. The transmission line parameters have been taken from the IEEE 9-bus model [13].

TABLE I
GENERATOR DETAILS

Name        | Bus No | Bus Type | Power Generation | R (ohm) | L (mH)
G1 (11 kV)  | B1     | Swing    | —                | 0.8929  | 16.58
G2 (11 kV)  | B4     | PV       | 20 MW            | 0.8929  | 16.58

During forward propagation, the following calculations are made in each layer:

Zi = Wi × Ai−1 + Bi    (1)
Ai = σ(Zi)             (2)

Here, if we have 'm' training samples and hi is the number of neurons in the ith layer, then Zi and Ai both have the shape (hi × m). Wi is the weight matrix of the ith layer with shape (hi × hi−1), and Bi is the bias vector of that layer with shape (hi × 1). σ() denotes the activation function used in the layer. In backward propagation, the output is compared with the true expected output, and the resulting error is minimized by adjusting the weights and biases. Depending on the type of problem, different loss functions are used. The categorical cross-entropy loss (Eqn. 3) is used for classification tasks, where the true labels are one-hot encoded with probabilities 1 and 0.

Lk = − Σ (j=1 to n) ykj × log(ŷkj)    (3)
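The per-layer computation of Eqns. (1)–(2) can be sketched in NumPy; the layer sizes, sample count, and ReLU choice below are illustrative, not the paper's exact models:

```python
import numpy as np

def relu(z):
    # Element-wise activation sigma() from Eqn. (2)
    return np.maximum(0.0, z)

def forward_layer(W, A_prev, B, sigma=relu):
    # Eqn. (1): Z_i = W_i A_{i-1} + B_i; result has shape (h_i x m)
    Z = W @ A_prev + B        # B of shape (h_i x 1) broadcasts over the m samples
    # Eqn. (2): A_i = sigma(Z_i)
    return sigma(Z)

m = 4                         # number of training samples (illustrative)
h_prev, h_i = 6, 10           # neurons in layers i-1 and i (illustrative)
rng = np.random.default_rng(0)
A_prev = rng.standard_normal((h_prev, m))   # (h_{i-1} x m)
W = rng.standard_normal((h_i, h_prev))      # (h_i x h_{i-1})
B = np.zeros((h_i, 1))                      # (h_i x 1)
A = forward_layer(W, A_prev, B)
print(A.shape)                              # (10, 4), i.e. (h_i x m)
```

The broadcast of the (hi × 1) bias across the m columns is what lets the whole batch be processed in one matrix product.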
where Lk is the loss for the kth sample, ŷkj is the predicted probability of the jth class of that sample, ykj is the true probability of that class, and 'n' is the number of classes. The Mean Absolute Error (MAE) loss (Eqn. 4) is used for regression problems, where ŷk and yk are the predicted and true values of the kth sample.

MAE = (1/m) Σ (k=1 to m) |yk − ŷk|    (4)

TABLE II
TRANSMISSION LINE DETAILS

Specifications                 | Values
Zero Sequence Resistance       | 0.11241 ohms/km
Zero Sequence Inductance       | 3.53 mH/km
Zero Sequence Capacitance      | 6.15 nF/km
Positive Sequence Resistance   | 0.044965 ohms/km
Positive Sequence Inductance   | 1.414 mH/km
Positive Sequence Capacitance  | 10.47 nF/km
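Both loss functions follow directly from Eqns. (3)–(4); this NumPy sketch assumes one-hot true labels for the cross-entropy case, with small example values chosen only for illustration:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred):
    # Eqn. (3): L_k = -sum_j y_kj * log(yhat_kj), averaged over the m samples
    # y_true is one-hot with shape (m x n); y_pred holds predicted probabilities
    per_sample = -np.sum(y_true * np.log(y_pred), axis=1)
    return per_sample.mean()

def mean_absolute_error(y_true, y_pred):
    # Eqn. (4): MAE = (1/m) sum_k |y_k - yhat_k|
    return np.mean(np.abs(y_true - y_pred))

# One-hot labels for 2 samples and 3 classes (illustrative)
y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1]])
print(round(categorical_cross_entropy(y_true, y_pred), 4))   # 0.2899
print(mean_absolute_error(np.array([10.0, 20.0]),
                          np.array([12.0, 19.0])))           # 1.5
```

Only the true class's predicted probability contributes to each sample's cross-entropy term, since every other ykj in the one-hot vector is zero.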
III. FAULT IDENTIFICATION

A. Deep Neural Network

A Deep Neural Network is a machine learning architecture that draws inspiration from the biological neural cells of human brains [2]. It has three types of layers: an input layer, hidden layers, and an output layer. The input layer accepts inputs in several different formats, and the hidden layers calculate and find hidden features.

B. Proposed Algorithm

We have constructed two DNNs separately, one for fault detection and classification and another for fault location identification. The number of hidden layers, neurons, and activation functions has been chosen to give optimum performance on the data set. Both models have 6 neurons in the input layer. The classification model consists of 3 hidden layers having 60-100-50 neurons (Fig. 3) respectively, with activation functions relu-relu-relu. The output layer has 5 neurons with the softmax activation function. The model is compiled with the 'Adam' optimizer and the categorical cross-entropy loss function. The fault location model has 4 hidden layers having 60-100-80-80 neurons (Fig. 4), with activation functions relu-tanh-relu-relu. Its output layer has a single neuron without an activation function, as required for regression. The model is compiled with the 'Adam' optimizer and the mean absolute error loss function. Batch Normalization layers are used between hidden layers to keep the mean of the data close to 0 and the standard deviation close to 1.

Fig. 5. Training performance of classification model
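The two layer stacks described above (6–60–100–50–5 with softmax, and 6–60–100–80–80–1 with a linear output) can be sketched as randomly initialized NumPy layers. This is a shape-level illustration only: the weights are untrained and the Batch Normalization layers are omitted for brevity.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=0, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=0, keepdims=True)

def build(sizes, rng):
    # One (W, B) pair per layer transition; W_i: (h_i x h_{i-1}), B_i: (h_i x 1)
    return [(rng.standard_normal((h, h_prev)) * 0.1, np.zeros((h, 1)))
            for h_prev, h in zip(sizes[:-1], sizes[1:])]

def forward(params, X, activations):
    A = X
    for (W, B), sigma in zip(params, activations):
        A = sigma(W @ A + B)
    return A

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8))            # 6 input features, 8 samples

# Classification model: 3 hidden ReLU layers, 5-class softmax output
clf = build([6, 60, 100, 50, 5], rng)
probs = forward(clf, X, [relu, relu, relu, softmax])

# Location model: hidden activations relu-tanh-relu-relu, linear single output
loc = build([6, 60, 100, 80, 80, 1], rng)
dist = forward(loc, X, [relu, np.tanh, relu, relu, lambda z: z])

print(probs.shape, dist.shape)             # (5, 8) (1, 8)
```

Each column of `probs` sums to 1, giving a probability over the 5 fault classes, while the location model emits one unbounded distance value per sample.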
Fig. 4. Layer structure for Location model

C. Dataset Generation

Our generated dataset contains two types of target labels. One is the distance of the fault location and the other is the type of fault. The fault type classes are LG, LL, LLG, LLL, and no fault (labeled as None), and the classes are one-hot encoded. The fault distance in a transmission line is measured from its corresponding bus.

We have taken the RMS values of the three-phase voltages (Va, Vb, Vc) and currents (Ia, Ib, Ic) as the input features. For transmission lines 1, 2, and 3, the input voltages and currents are taken from buses 1, 2, and 3 respectively. The sampling frequency is set to 2000 Hz, which is a suitable rate for the power system. In a single run, we took data from one electrical cycle, resulting in 40 samples for each phase of the voltages and currents, and then calculated the RMS value of the samples. In each transmission line, faults are introduced at 17 different places, each 5 km apart. At each location, 4 types of faults are introduced along with a no-fault condition. For each type

The model was simulated for 1000 epochs, and the optimum accuracy was obtained after the 848th iteration. Fig. 5 and Fig. 6 illustrate the training performance plot and the confusion matrix respectively. The classification model identifies the correct type of fault for all the test samples except in 5 cases for LL and LLG faults. 100% accuracy is obtained for the LG, LLL, and no-fault cases. The sum of the diagonal percentages confirms the accuracy on the test samples to be 98.7745%. The minimum cross-entropy loss is found to be 0.0351. Since the no-fault condition is set as an individual class, the model performs equally well at detecting whether there is any fault and identifying the type of fault when it occurs. The performance comparison with other related works is summarized in Table III.

For the location detection problem, the no-fault conditions are removed from the whole data set, and it is split into 3 parts, each part corresponding to a particular transmission line. For each transmission line's data, 85% of the total samples

TABLE III
COMPARISON OF RECENT WORK ON FAULT CLASSIFICATION
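The RMS feature extraction described in the dataset generation subsection, one 50 Hz electrical cycle sampled at 2000 Hz (40 samples per phase), can be sketched as follows; the synthetic sine waveform and its amplitude are purely illustrative:

```python
import numpy as np

FS = 2000.0           # sampling frequency, Hz
F_SYS = 50.0          # power system frequency, Hz
N = int(FS / F_SYS)   # samples in one electrical cycle: 40

def rms(samples):
    # Root-mean-square of one cycle of instantaneous samples
    return np.sqrt(np.mean(np.square(samples)))

t = np.arange(N) / FS
# Illustrative phase voltage: a pure sine whose RMS value is 6350 V
v_a = 6350.0 * np.sqrt(2) * np.sin(2 * np.pi * F_SYS * t)
print(N, round(rms(v_a)))    # 40 6350
```

Because exactly one full cycle is sampled, the RMS of the sampled sine recovers the analytical value peak/sqrt(2); each run therefore contributes one RMS number per phase of voltage and current, giving the 6 input features.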