Students Placement Prediction System
https://doi.org/10.22214/ijraset.2022.47448
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue XI Nov 2022- Available at www.ijraset.com
Abstract: Placement of students is one of the vital activities in academic institutions, and the admissions and reputation of an institution depend largely on its placements. Hence, all institutions strive to strengthen their placement departments. The main objective of this paper is to analyze the historical data of previous years' students, predict the placement possibilities of current students, and thereby help increase the placement percentage of the institution. We consider the placement of students not only by their academic performance but also by their aptitude, technical and communication skills, and we segregate the total placement data by stream to identify in which streams placements are higher and use that data to predict the next year's admission trends. We use different machine learning classification algorithms, namely K-Nearest Neighbors (KNN), AdaBoost and Random Forest. Each algorithm independently predicts the results, and we then compare the efficiency of the algorithms on the dataset. This model helps the placement cell within an institution to identify potential students and concentrate on improving their technical and interpersonal skills.
Keywords: Student Placement Prediction, AdaBoost, KNN Algorithm, Random Forest.
I. INTRODUCTION
We aim to develop a placement predictor, as part of a college-level placement management system, that predicts the probability of students getting placed. It will also help the teachers and the placement cell of an institution to give proper attention to the improvement of students over the duration of the course. We use machine learning for the placement prediction. Existing placement prediction models consider only the academic performance of the students to predict whether a student will be placed or not. We cannot judge the placement prospects of students by their academic performance alone, because some students may be strong in aptitude, technical and communication skills even though a low academic score would otherwise count against them. Predicting the placement of a student requires parameters such as CGPA and logical and technical skills. Academic performance is important, but the model is designed to predict placements based on these broader parameters and to segregate the placed-student data according to stream, so as to find out in which streams placements are higher. Using this model we can also predict the next year's admission trends.
The attributes of the dataset include the cumulative grade point, different subject scores and the type of company. Missing data values were replaced with the mean value for numeric data and the mode value for nominal data [5].
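As an illustration of this preprocessing step, the following is a minimal sketch in Python using pandas; the column names and values are hypothetical and not the actual attributes used in the cited work.

import pandas as pd

# hypothetical placement dataset with numeric and nominal attributes
df = pd.DataFrame({
    "cgpa": [8.1, None, 7.4, 9.0],
    "aptitude_score": [65, 72, None, 80],
    "stream": ["CSE", "ECE", None, "CSE"],
})

# numeric attributes: fill missing values with the column mean
for col in df.select_dtypes(include="number").columns:
    df[col] = df[col].fillna(df[col].mean())

# nominal attributes: fill missing values with the column mode
for col in df.select_dtypes(exclude="number").columns:
    df[col] = df[col].fillna(df[col].mode()[0])

print(df)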
Patel, T., et al., "Data Mining Techniques for Campus Placement Prediction in Higher Education," Indian J. Sci. Res. 14(2), 2017. In this paper, the authors evaluated the use of data mining techniques for campus placement prediction and used the WEKA tool for design and implementation. The parameters considered for measuring student performance were academic performance, communication skills, technical skills, professional training and projects. Different clustering algorithms, such as simple k-means, farthest-first clustering, filtered clustering and hierarchical clustering, were used for model development. It was observed that the time taken to build the simple k-means, farthest-first and filtered clustering models was only 0.02 s, compared with hierarchical clustering (0.09 s) and density-based clustering (0.08 s) [6].
V. PROPOSED METHODOLOGY
A. Algorithms
1) KNN Algorithm: KNN stands for K-Nearest Neighbors. It is a supervised learning algorithm mostly used for classifying data on the basis of how its neighbors are classified. KNN stores all available cases and classifies new cases based on a similarity measure. K in KNN is a parameter that refers to the number of nearest neighbors to include in the majority voting process.
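The following is a minimal sketch of such a classifier in Python with scikit-learn; the synthetic data, the value of k and the train/test split are illustrative assumptions, not the exact configuration used in this work.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# stand-in for the student data: 5 numeric features (e.g. CGPA, aptitude,
# technical and communication scores) and a placed / not-placed label
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# k = 5 nearest neighbours take part in the majority vote (assumed value of k)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

y_pred = knn.predict(X_test)
print("KNN accuracy:", accuracy_score(y_test, y_pred))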
2) AdaBoost: AdaBoost, also called Adaptive Boosting, is a machine learning technique used as an ensemble method. The most common base learner used with AdaBoost is a one-level decision tree, i.e. a decision tree with only one split. Such trees are also called decision stumps.
Formula for calculating the initial sample weights:
w(x_i, y_i) = 1/N, i = 1, 2, 3, ..., N
where N is the number of training samples; each sample starts with an equal weight, which is then updated after every boosting round.
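A minimal sketch of an AdaBoost classifier over decision stumps with scikit-learn could look as follows; the synthetic data and the number of boosting rounds are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# stand-in for the placement dataset (features + placed / not-placed label)
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# by default scikit-learn boosts decision stumps (depth-1 trees);
# 50 boosting rounds is an assumed, not tuned, setting
ada = AdaBoostClassifier(n_estimators=50, random_state=42)
ada.fit(X, y)

print("Training accuracy:", ada.score(X, y))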
3) Random Forest Algorithm: Random Forest is a popular machine learning algorithm that belongs to the supervised learning techniques. It can be used for both classification and regression problems in ML. It is based on the concept of ensemble learning, which is the process of combining multiple classifiers to solve a complex problem and to improve the performance of the model.
As the name suggests, Random Forest is a classifier that builds a number of decision trees on various subsets of the given dataset and combines their outputs to improve the predictive accuracy. Instead of relying on one decision tree, the random forest takes the prediction from each tree and predicts the final output based on the majority vote of those predictions.
How does the Random Forest algorithm work?
Random Forest works in two phases: the first is to create the random forest by combining N decision trees, and the second is to obtain a prediction from each tree created in the first phase and combine them. The working process can be explained in the following steps (a code sketch follows the steps):
Step 1: Select K random data points from the training set.
Step 2: Build a decision tree associated with the selected data points (subset).
Step 3: Choose the number N of decision trees that you want to build.
Step 4: Repeat Steps 1 and 2 until N trees have been built.
Step 5: For a new data point, collect the prediction of each decision tree and assign the new data point to the category that wins the majority of votes.
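A minimal sketch of this classifier with scikit-learn is given below; the synthetic data and the number of trees N are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# stand-in for the placement dataset (features + placed / not-placed label)
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# N = 100 trees, each grown on a bootstrap sample of the training data (assumed N)
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

# the final class is the majority vote over the individual trees
print("Test accuracy:", forest.score(X_test, y_test))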
VI. CONCLUSION
Student Placement Predictor is a system which predicts a student's placement status using machine learning and identifies in which streams placements are higher, in order to predict the next year's admission trends. These predictions can be used for counselling students and their parents on which stream has greater scope in the future.
REFERENCES
[1] "Prediction Model for Students Future Development by Deep Learning and TensorFlow Artificial Intelligence Engine," 2018 4th IEEE International Conference on Information Management, 2018.
[2] A. Padmapriya, "Prediction of Higher Education Admissibility Using Classification Algorithms," International Journal of Advanced Research in Computer Science and Software Engineering, ISSN: 2277 128X, November 2012.
[3] H. Sabnani, M. More, P. Kudale, S. Janrao, "Prediction of Student Enrolment Using Data Mining Techniques," International Research Journal of Engineering and Technology (IRJET), 5(4), 1830-1833, 2018.
[4] G. Animesh, M. Vignesh, P. Bysani, D. Naini, "A Placement Prediction System Using K-Nearest Neighbors Classifier," 2nd International Conference on Cognitive Computing and Information Processing (CCIP), 2015.
[5] P. Karan, B. Prateek, "Application of Data Mining in Predicting Student Placement," International Conference on Green Computing and Internet of Things (ICGCIoT), 2015.
[6] T. Patel, A. Tamrakar, "Data Mining Techniques for Campus Placement Prediction in Higher Education," Indian J. Sci. Res., 14(2), 467-471, 2017.