Automatic Generation of Descriptive Comments for Code Blocks
Table 1: Ten Active Projects on GitHub

Project | Description | # of bytes | # of Java files | # of methods | # methods commented
Activiti | a light-weight workflow and Business Process Management (BPM) platform | 168M | 2939 | 15875 | 1080
aima-java | Java implementation of algorithms from "Artificial Intelligence - A Modern Approach" | 182M | 889 | 4078 | 1130
neo4j | the world's leading graph database | 270M | 4125 | 24529 | 1197
cocos2d | cocos2d for Android, based on cocos2d-android-0.82 | 78M | 512 | 3677 | 1182
rhino | a Java implementation of JavaScript | 21M | 352 | 4610 | 1195
spring-batch | a framework for writing offline and batch applications using Spring and Java | 56M | 1742 | 7936 | 1827
Smack | an open source, highly modular, easy to use XMPP client library written in Java | 41M | 1335 | 5034 | 2344
guava | Java-based projects: collections, caching, primitives | 80M | 1710 | 20321 | 3079
jersey | a REST framework that provides a JAX-RS Reference Implementation and more | 73M | 2743 | 14374 | 2540
libgdx | a cross-platform Java game development framework based on OpenGL (ES) | 989M | 1906 | 18889 | 2828

A comment here refers to the description at the beginning of a method, with more than eight words.
source code:

    if (!found){
        allFound = false;
    }
    if (allFound){
        return true;
    }

Figure 3: Code-RNN Example (the parse tree of the source code above, with tokens as leaf nodes, processed by Code-RNN from the leaves up to the root)
longer forms in the context of the identifier in the code. Specifically, we compare the identifier with the word list generated from the context of the identifier to see whether the identifier's name is a substring of some word from the list, or is the combination of the initials of the words in the list. If the list contains only one word, we just check whether the identifier is part of that word. If so, we conclude that the identifier is, with high probability, the abbreviation of that word. If the list contains multiple words, we collect the initials of all the words in the list to see whether the identifier appears in this collection. Suppose the code fragment is

    Matrix dm = new DoubleMatrix(confusionMatrix);

We search for the original words of "dm" as follows. Since "dm" is not a substring of any word in the context, we collect the initials of the contextual words in a list: "m", "dm" and "cm". Therefore, "dm" is an abbreviation of "DoubleMatrix".

Table 2: Example of Split Identifiers

Identifier | Words
contextInitialize | context, initialize
apiSettings | api, settings
buildDataDictionary | build, data, dictionary
add result | add, result

are parameters for the softmax function and will be tuned during training. We use AdaGrad (Duchi, Hazan, and Singer 2011) to apply a unique learning rate to each parameter.

2.2 Comment Generation

Existing work (Elman 1990; Sutskever, Martens, and Hinton 2011; Mikolov et al. 2010) has used Recurrent Neural Networks to generate sentences. However, one challenge in utilizing the code block representation vector in a Recurrent NN is that we cannot feed the code block representation vector to the Recurrent NN cell directly. We thus propose a variation of the GRU-based RNN. Fig. 4 shows our comment generation process.

[Figure 4: Comment generation process]
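The identifier-splitting and abbreviation-matching heuristics described above can be sketched in Python as follows. This is a minimal illustration of the two rules in the text (substring match, then initials match); the function names and exact tie-breaking are our assumptions, not the authors' implementation.

```python
import re

def split_identifier(identifier):
    """Split a camelCase or snake_case identifier into lower-case words."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", identifier.replace("_", " "))
    return [w.lower() for w in spaced.split()]

def expand_abbreviation(identifier, context_words):
    """Guess the long form of `identifier` from words in its context.

    Rule 1: the identifier is a substring of some contextual word.
    Rule 2: the identifier equals the initials of a contextual word's
    sub-words, e.g. "dm" -> "DoubleMatrix" -> ["double", "matrix"] -> "dm".
    """
    ident = identifier.lower()
    for word in context_words:
        lowered = word.lower()
        if ident != lowered and ident in lowered:
            return word
    for word in context_words:
        parts = split_identifier(word)
        if len(parts) > 1 and ident == "".join(p[0] for p in parts):
            return word
    return None
```

On the example from the text, `expand_abbreviation("dm", ["Matrix", "DoubleMatrix", "confusionMatrix"])` resolves "dm" to "DoubleMatrix" via the initials rule.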
zt = σ(Wz · [ht−1, xt])    (4)
rt = σ(Wr · [ht−1, xt])    (5)
ct = σ(Wc · [ht−1, xt])    (6)
h̃t = tanh(W · [rt ∗ ht−1, ct ∗ Vm, xt])    (7)
ht = (1 − zt) ∗ ht−1 + zt ∗ h̃t    (8)
yt = softmax(Woh ht + bo)    (9)

where Vm is the code block representation vector, ht−1 is the previous state and xt is the input word of this step.

To better use the code block vectors, our model differs from existing RNNs, particularly in the definition of ct in Equations 6 and 7. The new RNN cell, illustrated in Fig. 5, aims to strengthen the effect of the code block vectors. This modified GRU is hereinafter called Code-GRU. The code block vector contains all the information of the code block, but not all of that information is useful at every step. Therefore, we add a new gate, called the choose gate, to determine which dimensions of the code block vector take effect in Code-GRU. In Fig. 5, the left gate is the choose gate, and the other two gates are the same as in the original GRU.

[Figure 5: Structure of Code-GRU]

During test time, we input the "START" token at first and choose the most probable word as the output. From the second step on, the input word of every step is the output word of the previous step, until the output is the "END" token. In this way we can get an automatically generated comment for code blocks in our model.

To gain better results, we also apply beam search while testing. We adopt a variant of beam search with a length penalty described in (Wu et al. 2016). In this beam search model, there are two parameters: beam size and the weight for the length penalty. We tune these two parameters on the validation set to determine which values to use. Our tuning ranges are:

• beam size: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
• weight for the length penalty: [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]

3 Evaluation

Our evaluation comes in two parts. In the first part, we evaluate the Code-RNN model's ability to classify different source code blocks into k known categories. In the second part, we show the effectiveness of our comment generation model by comparing with several state-of-the-art approaches in both quantitative and qualitative assessments. The source code of our approach as well as all data sets is available at https://adapt.seiee.sjtu.edu.cn/CodeComment/.

3.1 Source Code Classification

Data Set The goal is to classify a given Java method (we only use the body block, without name and parameters) into a predefined set of classes depending on its functionality. Our data set comes from the Google Code Jam contest (2008∼2016), in which there are multiple problems, each associated with a number of correct solutions contributed by programmers.³ Each solution is a Java method. The solutions to the same problem are considered to function identically and belong to the same class in this work. We use the solutions (10,724 methods) of 6 problems as the training set and the solutions (30 methods) of the other 6 problems as the test set. Notice that the problems in the training data and the ones in the test data do not overlap. We specifically design the data set this way because many methods for the same problem tend to use the same or a similar set of identifiers, which is not true in real-world applications. The details of the training set and test set are shown in Table 4.

Table 4: Data Sets for Source Code Clustering

Set | Problem | Year | # of methods
Training Set | Cookie Clicker Alpha | 2014 | 1639
Training Set | Counting Sheep | 2016 | 1722
Training Set | Magic Trick | 2014 | 2234
Training Set | Revenge of the Pancakes | 2016 | 1214
Training Set | Speaking in Tongues | 2012 | 1689
Training Set | Standing Ovation | 2015 | 2226
Test Set | All Your Base | 2009 | 5
Test Set | Consonants | 2013 | 5
Test Set | Dijkstra | 2015 | 5
Test Set | GoroSort | 2011 | 5
Test Set | Osmos | 2013 | 5
Test Set | Part Elf | 2014 | 5

Baselines We compare Code-RNN with two baseline approaches. The first one is called language embedding (LE) and only treats the source code as a sequence of words, minus the special symbols (e.g., "$", "(", "+", · · ·). All concatenated words are preprocessed into primitive words as previously discussed. Then the whole code can be represented by either the sum (LES) or the average (LEA) of the word vectors of this sequence, trained in this model. This approach basically focuses on the word semantics only and ignores the structural information from the source code.

The second baseline is a variant of Code-RNN, which preprocesses the code parse tree by consistently replacing the identifier names with placeholders before computing the overall representation of the tree. This variant focuses on the structural properties only and ignores the word semantics.

³ All solutions are available at http://www.go-hero.net/jam/16.
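As a concrete reading of the Code-GRU update in Section 2.2 (Equations 4–8), the following pure-Python sketch performs one recurrent step. The helper names, dimensions, and random weights are our illustrative assumptions, not the authors' implementation; note that the Hadamard product in the choose gate implies the code vector Vm has the same dimension as the hidden state here.

```python
import math
import random

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def had(a, b):  # element-wise (Hadamard) product
    return [x * y for x, y in zip(a, b)]

def code_gru_step(h_prev, x_t, v_m, Wz, Wr, Wc, W):
    """One Code-GRU step: a GRU whose candidate state also sees the code
    block vector v_m, filtered by a 'choose' gate c_t (Eqs. 4-8)."""
    hx = h_prev + x_t                            # concatenation [h_{t-1}, x_t]
    z = sigmoid(matvec(Wz, hx))                  # update gate, Eq. (4)
    r = sigmoid(matvec(Wr, hx))                  # reset gate,  Eq. (5)
    c = sigmoid(matvec(Wc, hx))                  # choose gate, Eq. (6)
    cand = had(r, h_prev) + had(c, v_m) + x_t    # [r*h_{t-1}, c*V_m, x_t]
    h_tilde = [math.tanh(x) for x in matvec(W, cand)]   # Eq. (7)
    # Eq. (8): interpolate between previous state and candidate state
    return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h_prev, h_tilde)]

# Tiny usage example with random weights (hidden dim 3, input dim 2,
# code vector dim 3).
random.seed(0)
def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

Wz, Wr, Wc = (rand_mat(3, 5) for _ in range(3))
W = rand_mat(3, 8)                               # candidate sees h + v_m + x
h1 = code_gru_step([0.0] * 3, [1.0, 0.5], [0.2, -0.3, 0.1], Wz, Wr, Wc, W)
```

Because the new state is a convex combination of the previous state and a tanh candidate, each component of `h1` stays in (−1, 1) when starting from a zero state.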
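The beam search with length penalty used at decoding time can be illustrated with a toy decoder. The penalty form below follows Wu et al. (2016); `step_fn`, the "START"/"END" tokens, and the toy model are our own illustrative assumptions, not the paper's exact decoder.

```python
import math

def length_penalty(length, alpha):
    # Length penalty of Wu et al. (2016): ((5 + |Y|) / 6) ** alpha
    return ((5.0 + length) / 6.0) ** alpha

def beam_search(step_fn, beam_size, alpha, max_len=20):
    """step_fn(prefix) -> list of (token, probability) for the next step.

    Hypotheses are scored by total log-probability divided by the length
    penalty; decoding starts from "START" and stops at "END"."""
    beams = [(["START"], 0.0)]               # (sequence, sum of log-probs)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            for tok, p in step_fn(seq):
                cand = (seq + [tok], logp + math.log(p))
                (finished if tok == "END" else candidates).append(cand)
        if not candidates:
            break
        candidates.sort(key=lambda s: s[1] / length_penalty(len(s[0]), alpha),
                        reverse=True)
        beams = candidates[:beam_size]       # keep the best `beam_size` prefixes
    finished.extend(beams)                   # include unfinished hypotheses
    best = max(finished, key=lambda s: s[1] / length_penalty(len(s[0]), alpha))
    return best[0][1:]                       # drop the "START" token

# Toy decoder: two choices per step, forced to end after three tokens.
def toy_step(prefix):
    if len(prefix) >= 3:
        return [("END", 1.0)]
    return [("good", 0.6), ("bad", 0.4)]
```

With `toy_step`, both a greedy search (beam size 1) and a wider beam return the sequence ["good", "good", "END"], since "good" dominates at every step.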
Result of Classification At test time, when a method is classified into a class label, we need to determine which test problem this class label refers to. To that end, we compute the classification accuracy for all possible class label assignments and use the highest accuracy as the one given by a model.

Table 5 shows the purity of the produced classes, and the F1 and accuracy of the 6-class classification problem by different methods. It is clear that Code-RNN (avg) performs uniformly better than the baselines that use only word semantics or only structural information. Therefore, in the rest of this section, we will use the Code-RNN (avg) model to create the vector representation of a given method to be used for comment generation. The F1 score for each individual problem is also included in Table 6.

Table 5: Purity, Average F1 and Accuracy

Model | Purity | F1 | Accuracy
LEA | 0.400 | 0.3515 | 0.3667
LES | 0.3667 | 0.2846 | 0.3667
CRA(ni) | 0.4667 | 0.4167 | 0.4667
CRS(ni) | 0.4667 | 0.4187 | 0.4667
CRA | 0.533 | 0.4774 | 0.5
CRS | 0.4667 | 0.3945 | 0.4333

LEA = Language Embedding Average model; LES = Language Embedding Sum model; CRA = Code-RNN Average model; CRS = Code-RNN Sum model; (ni) = no identifier.

Table 6: F1 scores of individual problems

Model | Dijkstra | Part Elf | All Your Base | GoroSort | Consonants | Osmos
LEA | 0.25 | 0.33 | 0.43 | 0.33 | 0.4 | 0.36
LES | 0.33 | 0 | 0.53 | 0 | 0.53 | 0.31
CRA(ni) | 0.6 | 0 | 0.44 | 0.40 | 0.56 | 0.5
CRS(ni) | 0.62 | 0.29 | 0.67 | 0.5 | 0.44 | 0
CRA | 0.67 | 0 | 0.6 | 0.57 | 0.53 | 0.52
CRS | 0.73 | 0 | 0.44 | 0.55 | 0.4 | 0.25

generate sentences. The original data set for CODE-NN consists of StackOverFlow thread title and code snippet pairs.⁵ In this experiment, we use the comment-code pair data in place of the title-snippet data.

• We apply the sequence-to-sequence (seq2seq) model used in machine translation (Britz et al. 2017) and treat the code as a sequence of words and the comment as another sequence.

• A. Karpathy and L. Fei-Fei (Karpathy and Fei-Fei 2015) proposed a meaningful method to generate image descriptions. It also used a Recurrent NN and a representation vector, so we apply this method to the comment generation task. The main equations are:

bv = Whi Vm    (10)
ht = f(Whx xt + Whh ht−1 + bh + bv)    (11)
yt = softmax(Woh ht + bo)    (12)

where Whi, Whx, Whh, Woh and bh, bo are parameters to be learned, and Vm is the method vector. We call this model Basic RNN.

MOSES and CODE-NN have their own termination conditions. Seq2Seq, Basic RNN and our model run 800 epochs during training time. For each project, we separate the commented methods into three parts: training set, validation set and test set. We tune the hyperparameters on the validation set. The results on the ten repositories are shown in Table 7.

Evaluation Metric We evaluate the quality of comment generation by the Rouge method (Lin 2004). The Rouge model counts the number of overlapping units between the generated sentence and the target sentence. We choose the Rouge-2 score in this paper, where word-based 2-grams are used as the unit, as it is the most commonly used in evaluating automatic text generation such as summarization.
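The Rouge-2 overlap computation can be sketched as follows. This is a simplified recall-only version written for illustration; the official ROUGE package of Lin (2004) also reports precision and F-measure and handles stemming and multiple references.

```python
def rouge_2_recall(generated, reference):
    """Fraction of the reference's word bigrams that also occur in the
    generated sentence (overlap counts are clipped per bigram)."""
    def bigram_counts(sentence):
        tokens = sentence.split()
        counts = {}
        for pair in zip(tokens, tokens[1:]):
            counts[pair] = counts.get(pair, 0) + 1
        return counts

    ref = bigram_counts(reference)
    gen = bigram_counts(generated)
    total = sum(ref.values())
    if total == 0:
        return 0.0
    overlap = sum(min(count, gen.get(bg, 0)) for bg, count in ref.items())
    return overlap / total
```

For example, comparing "creates an int buffer" against the reference "creates an int array" matches 2 of the reference's 3 bigrams, giving a Rouge-2 recall of 2/3.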
Project: jersey

    public long tryConvertToReadLock(long stamp) {
        long a = stamp & ABITS, m, s, next; WNode h;
        while (((s = state) & SBITS) == (stamp & SBITS)) {
            if ((m = s & ABITS) == 0L) {
                if (a != 0L) break;
                else if (m < RFULL) {
                    if (U.compareAndSwapLong(this, STATE, s, next = s + RUNIT))
                        return next;
                }
                else if ((next = tryIncReaderOverflow(s)) != 0L)
                    return next;
            }
            else if (m == WBIT) {
                if (a != m) break;
                state = next = s + (WBIT + RUNIT);
                if ((h = whead) != null && h.status != 0)
                    release(h);
                return next;
            }
            else if (a != 0L && a < WBIT) return stamp;
            else break;
        }
        return 0L;
    }

Gold | if the lock state matches the given stamp performs one of the following actions if the stamp represents holding a write lock releases it and UNK a read lock or if a read lock returns it or if an optimistic read acquires a read lock and returns a read stamp only if immediately available this method returns zero in all other cases
MOSES | if the lock state matches the given if the lock state matches the given gets of processing sbits state matches the given sbits string of the lock hold abits l l break component of rfull that runs sets of processing of runit or create a new pattern if this inc reader overflow l create a human readable description of component of wbit break if the lock of processing of wbit runit h whead by the given status release the given action if the sum associated with the given component l lock state matches the given action wbit get returns break l
CODE-NN | returns code true if the lock is not a link org glassfish jersey server mvc
Seq2Seq | UNK a new item to the list of superclass timeout version
Basic RNN* | get a UNK to a link javax ws rs core UNK
Code-GRU* | if the lock state matches the given stamp performs one of the following actions if the stamp represents holding a write lock returns it or if a read lock if the write lock is available releases the read lock and returns a write stamp or if an optimistic read returns

Project: libgdx

    public static IntBuffer allocate (int capacity) {
        if (capacity < 0) {
            throw new IllegalArgumentException();
        }
        return BufferFactory.newIntBuffer(capacity);
    }

Gold | creates an int buffer based on a newly allocated int array
MOSES | based on the creates a new backing buffer
CODE-NN | creates a byte buffer based on a newly allocated char array
Seq2Seq | creates a float buffer based on a newly allocated float array
Basic RNN* | creates a char buffer based on a newly allocated char array
Code-GRU* | creates a long buffer based on a newly allocated long array
In the first example, we can see that CODE-NN, Seq2Seq and Basic RNN's results are poor and have almost nothing to do with the Gold comment. Even though MOSES produces a sequence of words that looks similar to the Gold in the beginning, the rest of the result is less readable and does not carry any useful information. For example, "if the lock state matches the given" is output repeatedly. MOSES also produces strange terms such as "wbit" and "runit" just because they appeared in the source code. In contrast, Code-GRU's result is more readable and meaningful.

In the second example, there is not any useful word in the method body, so the results of MOSES, CODE-NN and Seq2Seq are bad. Code-RNN can extract the structural information of source code and embed it into a vector, so both models that use this vector, namely Basic RNN and Code-GRU, can generate relevant comments.

In the third example, although all results change the type of the value (Basic RNN changes "int" to "char" while Code-GRU changes it to "long"), "long" and "int" are both numerical types while "char" is not. Thus Code-GRU is better than Basic RNN. As for the result of Seq2Seq, although "float" is also a numerical type, it is for real numbers, not integers.

4 Related Work

Mining of source code repositories has become increasingly popular in recent years. Existing work in source code mining includes code search, clone detection, software evolution, models of software development processes, bug localization, software bug prediction, code summarization and so on. Our work can be categorized as code summarization and comment generation.

Sridhara et al. (Sridhara et al. 2010) proposed an automatic comment generator that identifies the content for
the summary and generates natural language text that summarizes the method's overall actions based on some templates. Moreno et al. (Moreno et al. 2013) also proposed a template-based method, but it is used for summarizing Java classes. McBurney and McMillan (McBurney and McMillan 2014) presented a novel approach for automatically generating summaries of Java methods that summarize the context surrounding a method, rather than details from the internals of the method. These summarization techniques (Murphy 1996; Sridhara, Pollock, and Vijay-Shanker 2011; Moreno et al. 2013; Haiduc et al. 2010) work by selecting a subset of the statements and keywords from the code, and then including information from those statements and keywords in the summary. To improve them, Rodeghero et al. (Rodeghero et al. 2014) presented an eye-tracking study of programmers during source code summarization, and a tool for selecting keywords based on the findings of the eye-tracking study.

These models are invariably based on templates and careful selection of fragments of the input source code. In contrast, our model is based on learning and neural networks. There are also some models that apply learning methods to mine source code.

Movshovitz-Attias and Cohen (Movshovitz-Attias and Cohen 2013) predicted comments using topic models and n-grams. Like source code summarization, Allamanis et al. (Allamanis et al. 2015) proposed a continuous embedding model to suggest accurate method and class names. Iyer et al. (Iyer et al. 2016) proposed a new model called CODE-NN that uses Long Short Term Memory (LSTM) networks with attention to produce sentences that can describe C# code snippets and SQL queries. Iyer et al.'s work has strong performance on two tasks, code summarization and code retrieval. This work is very similar to ours, in that we both use a Recurrent NN to generate sentences for source code. What differs is that we propose a new type of Recurrent NN. Kuhn et al. (Kuhn, Ducasse, and Gı́rba 2007) utilized the information of identifier names and comments to mine topics of source code repositories. Punyamurthula (Punyamurthula 2015) used call graphs to extract the metadata and dependency information from the source code and used this information to analyze the source code and get its topics.

In other related domains of source code mining, code search is a popular research direction. Most search engines solve the problem by keyword extraction and signature matching. Maarek et al. (Maarek, Berry, and Kaiser 1991) used keywords extracted from man pages written in natural language, and their work is an early example of approaches based on keywords. Rollins and Wing (Rollins and Wing 1991) proposed an approach to find code by the signatures present in code. Mitchell (Mitchell 2008) combined signature matching with keyword matching. Then Garcia-Contreras et al. (Garcia-Contreras, Morales, and Hermenegildo 2016) focused on querying for semantic characteristics of code and proposed a new approach which combines semantic characteristics and keyword matching.

Cai (Cai 2016) proposed a method for code parallelization through sequential code search. That method can also be used for clone detection. Williams and Hollingsworth (Williams and Hollingsworth 2005) described a method to use the source code change history of a software project to drive and help refine the search for bugs. Adhiselvam et al. (Adhiselvam, Kirubakaran, and Sukumar 2015) used the MRTBA algorithm to localize bugs to help programmers debug. The method proposed in this paper can also benefit natural language search for code fragments.

5 Conclusion

In this paper we introduce a new Recursive Neural Network called Code-RNN to extract the topic or function of source code. This new Recursive Neural Network follows the parse tree of the source code, and we go through the whole tree from the leaf nodes to the root node to get the final representation vector. Then we use this vector to classify the source code into classes according to their function, and the classification results are acceptable. We further propose a new kind of GRU called Code-GRU to utilize the vector created by Code-RNN to generate comments. We apply Code-GRU to ten source code repositories and gain the best result in most projects. This framework can also be applied to other programming languages as long as we have access to the parse tree of the input program.

As future work, we can add call graphs into our model, so that Code-RNN can contain invocation information and extract more topics from source code.

Acknowledgement

This work was supported by the Oracle-SJTU Joint Research Scheme, NSFC Grant No. 91646205, 71421002 and 61373031, and SJTU funding project 16JCCS08. Hongfei Hu contributed to the identifier semantics part of the work.

References

Adhiselvam, A.; Kirubakaran, E.; and Sukumar, R. 2015. An enhanced approach for software bug localization using map reduce technique based apriori (MRTBA) algorithm. Indian Journal of Science and Technology 8(35).

Allamanis, M.; Barr, E. T.; Bird, C.; and Sutton, C. 2015. Suggesting accurate method and class names. In ESEC/FSE, 38–49. ACM.

Blei, D. M.; Ng, A. Y.; and Jordan, M. I. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3(Jan):993–1022.

Britz, D.; Goldie, A.; Luong, T.; and Le, Q. 2017. Massive exploration of neural machine translation architectures. ArXiv e-prints.

Cai, B. 2016. Code parallelization through sequential code search. In ICSE-C, 695–697. ACM.

Cho, K.; Van Merriënboer, B.; Bahdanau, D.; and Bengio, Y. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.

Duchi, J.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121–2159.

Elman, J. L. 1990. Finding structure in time. Cognitive Science 14(2):179–211.

Garcia-Contreras, I.; Morales, J. F.; and Hermenegildo, M. V. 2016. Semantic code browsing. arXiv preprint arXiv:1608.02565.

Haiduc, S.; Aponte, J.; Moreno, L.; and Marcus, A. 2010. On the use of automated text summarization techniques for summarizing source code. In WCRE, 35–44. IEEE.

Iyer, S.; Konstas, I.; Cheung, A.; and Zettlemoyer, L. 2016. Summarizing source code using a neural attention model. In ACL, 2073–2083.

Karpathy, A., and Fei-Fei, L. 2015. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3128–3137.

Kuhn, A.; Ducasse, S.; and Gı́rba, T. 2007. Semantic clustering: Identifying topics in source code. Information and Software Technology 49(3):230–243.

Lin, C.-Y. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8. Barcelona, Spain.

Maarek, Y. S.; Berry, D. M.; and Kaiser, G. E. 1991. An information retrieval approach for automatically constructing software libraries. IEEE Transactions on Software Engineering 17(8):800–813.

McBurney, P. W., and McMillan, C. 2014. Automatic documentation generation via source code summarization of method context. In ICPC, 279–290. ACM.

Mikolov, T.; Karafiát, M.; Burget, L.; Cernockỳ, J.; and Khudanpur, S. 2010. Recurrent neural network based language model. In Interspeech, volume 2, 3.

Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 3111–3119.

Mitchell, N. 2008. Hoogle overview. The Monad.Reader 12:27–35.

Moreno, L.; Aponte, J.; Sridhara, G.; Marcus, A.; Pollock, L.; and Vijay-Shanker, K. 2013. Automatic generation of natural language summaries for Java classes. In ICPC, 23–32. IEEE.

Movshovitz-Attias, D., and Cohen, W. W. 2013. Natural language models for predicting programming comments.

Murphy, G. C. 1996. Lightweight structural summarization as an aid to software evolution. Ph.D. Dissertation.

Punyamurthula, S. 2015. Dynamic model generation and semantic search for open source projects using big data analytics. Master's thesis, University of Missouri-Kansas City.

Rodeghero, P.; McMillan, C.; McBurney, P. W.; Bosch, N.; and D'Mello, S. 2014. Improving automated source code summarization via an eye-tracking study of programmers. In ICSE, 390–401. ACM.

Rollins, E. J., and Wing, J. M. 1991. Specifications as search keys for software libraries. In ICLP, 173–187. Citeseer.

Socher, R.; Lin, C. C.; Manning, C.; and Ng, A. Y. 2011a. Parsing natural scenes and natural language with recursive neural networks. In ICML, 129–136.

Socher, R.; Pennington, J.; Huang, E. H.; Ng, A. Y.; and Manning, C. D. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP, 151–161. Association for Computational Linguistics.

Sridhara, G.; Hill, E.; Muppaneni, D.; Pollock, L.; and Vijay-Shanker, K. 2010. Towards automatically generating summary comments for Java methods. In ASE, 43–52. ACM.

Sridhara, G.; Pollock, L.; and Vijay-Shanker, K. 2011. Generating parameter comments and integrating with method summaries. In ICPC, 71–80. IEEE.

Sutskever, I.; Martens, J.; and Hinton, G. E. 2011. Generating text with recurrent neural networks. In ICML, 1017–1024.

Williams, C. C., and Hollingsworth, J. K. 2005. Automatic mining of source code repositories to improve bug finding techniques. IEEE Transactions on Software Engineering 31(6):466–480.

Wu, Y.; Schuster, M.; Chen, Z.; Le, Q. V.; Norouzi, M.; Macherey, W.; Krikun, M.; Cao, Y.; Gao, Q.; Macherey, K.; et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.