Salary Prediction-2
INTRODUCTION
Nowadays, one of the major reasons an employee switches companies is salary. Employees keep switching companies to get their expected salary, and this results in a loss for the company. To overcome this loss, we came up with an idea: what if the employee gets the desired/expected salary from the company or organization? In this competitive world, everyone has high expectations and goals.
The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and adjust their actions accordingly.
Machine learning algorithms are broadly classified into three divisions, namely: supervised learning, unsupervised learning and reinforcement learning.
• Unsupervised Learning:- Unsupervised learning is the training of a machine using information that is neither classified nor labelled, allowing the algorithm to act on that information without guidance. Here the task of the machine is to group unsorted information according to similarities, patterns and differences without any prior training on the data. Unlike supervised learning, no teacher is provided, which means no training will be given to the machine. The machine is therefore left to find the hidden structure in unlabelled data by itself; a small clustering sketch follows.
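As an illustration of this grouping-by-similarity idea, here is a minimal sketch (not part of the project) using k-means clustering from scikit-learn on invented, unlabelled 2-D points:

import numpy as np
from sklearn.cluster import KMeans

# Unlabelled data: two loose groups of 2-D points.
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])

# k-means groups the points into 2 clusters purely by similarity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # cluster assignments found without any labels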
• Reinforcement Learning:- Reinforcement learning is an area of machine learning. It is about taking suitable actions to maximize reward in a particular situation. It is employed by various software and machines to find the best possible behaviour or path to take in a specific situation. Reinforcement learning differs from supervised learning in that in supervised learning the training data carries the answer key, so the model is trained with the correct answer itself, whereas in reinforcement learning there is no answer; the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its experience. Reinforcement learning is a method in which the learner interacts with its environment by producing actions and discovering errors or rewards. Trial-and-error search and delayed reward are the most relevant characteristics of reinforcement learning. This method allows machines and software agents to automatically determine the ideal behaviour within a specific context in order to maximize their performance. Simple reward feedback is required for the agent to learn which action is best; this is known as the reinforcement signal. A small sketch of this idea follows.
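To make the reward-feedback idea concrete, here is a hedged sketch (not part of the project) of an epsilon-greedy agent learning by trial and error which of two actions yields more reward; the reward probabilities are invented for the example:

import random

true_reward = [0.3, 0.7]      # hidden reward probability of each action
estimates = [0.0, 0.0]        # the agent's current value estimates
counts = [0, 0]
epsilon = 0.1                 # exploration rate

for step in range(1000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # Update the running estimate from the reinforcement signal.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # approaches the true reward probabilities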
The project uses various regression techniques for predicting the salary of the
employees. The techniques are listed as follows.
REQUIRED TOOLS
CHAPTER 2
LITERATURE SURVEY
1) Susmita Ray, "A Quick Review of Machine Learning Algorithms," 2019.
2) "Prediction Engine for predicting suitable salary for a job," 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN) - focused on the problem of predicting salaries for job advertisements in which the salary is not mentioned, and also tried to help freshers predict the possible salary at different companies in different locations. The cornerstone of this study is a dataset provided by ADZUNA; the model is well capable of predicting precise values.
4) Phuwadol Viroonluecha, Thongchai Kaewkiriya, "Salary Predictor System for Thailand Labour Workforce using Deep Learning" - used deep learning techniques to construct a model that predicts the monthly salary of job seekers in Thailand, treating it as a regression problem with a numerical outcome. Five months of personal profile data from a well-known job search website were used for the analysis. As a result, the deep learning model showed strong performance in both accuracy and processing time, with an RMSE of 0.774 x 10^4 and a runtime of only 17 seconds.
CHAPTER 3
MERITS OF THE SYSTEM
1. Easily identifies trends and patterns: Machine learning models can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, machine learning serves to understand the browsing behaviours and purchase histories of its users, helping cater the right products, deals, and reminders to them. It uses the results to reveal relevant advertisements to them.
CHAPTER 4
ARCHITECTURAL FLOW OF THE SYSTEM
CHAPTER 5
DESCRIPTION OF MODULES
For the Salary Prediction Model, we will be using Python library modules such as numpy, pandas, matplotlib & sklearn, through which we will import different functions for computing the model. Further, we will use the Flask library, which is one of the most important, as it connects our back-end code with the front end; a small sketch of this connection follows.
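Below is a hedged sketch of how Flask might wire the trained model to a front end; the route name, form field and toy training data are illustrative assumptions, not the project's actual code (the real model is trained in Chapter 7):

from flask import Flask, request
from sklearn.linear_model import LinearRegression
import numpy as np

app = Flask(__name__)

# Stand-in for the model trained in Chapter 7, fitted on invented toy data.
regressor = LinearRegression().fit(
    np.array([[1.0], [5.0], [10.0]]),
    np.array([40000.0, 80000.0, 130000.0]))

@app.route("/predict", methods=["POST"])
def predict():
    # Read the years of experience submitted by the front-end form.
    yoe = float(request.form["years_of_experience"])
    salary = regressor.predict([[yoe]])[0]
    return f"Predicted salary: {salary:.2f}"

if __name__ == "__main__":
    app.run(debug=True)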
scikit-learn (sklearn) provides tools for classification, regression, clustering and dimensionality reduction. sklearn is used to build machine learning models; it should not be used for reading, manipulating or summarizing data. Components of scikit-learn (a brief cross-validation sketch follows the list):
• Supervised learning algorithms
• Cross-validation
• Feature extraction
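As a small illustration of the cross-validation component, the following sketch scores a linear regression model on invented toy data with 5 folds:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy data: a noisy line (values invented for the example).
X = np.arange(20, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + np.random.default_rng(0).normal(0.0, 1.0, 20)

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(scores.mean())   # average R^2 across the 5 folds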
CHAPTER 6
UML DIAGRAMS
1. ER Diagram: An entity relationship diagram (ERD) shows the relationships of entity sets stored in a database. An entity in this context is an object, a component of data. An entity set is a collection of similar entities. These entities can have attributes that define their properties.
3. Activity Diagram: An activity diagram is a behavioural diagram, i.e., it depicts the behaviour of a system. An activity diagram portrays the control flow from a start point to a finish point, showing the various decision paths that exist while the activity is being executed.
CHAPTER 7
IMPLEMENTATION OF THE SYSTEM
SOURCE CODE:
In[1]: import pandas as pd
dataset = pd.read_csv('/content/Salary_Data_SLR.csv')  # load the salary dataset
In[2]: dataset  # inspect the raw data
In[3]: X = dataset.iloc[:, 0].values  # feature: years of experience
In[4]: X
In[5]: X = X.reshape(-1, 1)  # reshape into a 2-D column, as sklearn expects
In[6]: X
In[7]: Y = dataset.iloc[:, -1].values  # target: salary
In[8]: Y
In[9]: import plotly.express as px
fig = px.line(dataset, x="YearsExperience", y="Salary")  # visualize the trend
fig.show()
In[10]: from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)  # 80:20 split
In[11]: X_train.shape
In[12]: X_test.shape
In[13]: Y_train.shape
In[14]: Y_test.shape
In[15]: from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, Y_train)  # learn the slope and intercept
In[16]: regressor.coef_  # slope (B1)
In[17]: regressor.intercept_  # y-intercept (B0)
In[18]: from sklearn.metrics import r2_score
Y_pred = regressor.predict(X_test)
print(r2_score(Y_test, Y_pred))  # note: y_true first, then y_pred
In[19]: yoe = float(input("Enter Years of Experience: "))
regressor.predict([[yoe]])  # predicted salary for the given experience
1. The Data: Data collection is the first real step towards the development of a machine learning model. This is a critical step that cascades into how good the model will be: the more and better data we get, the better our model will perform. Our dataset, named “survey_results_public”, is a raw dataset, which means that a lot of pre-processing is required before it becomes useful for evaluation. Our dataset consists of 83439 rows and 48 features that will help us predict the salary, and it is a fairly big dataset.
2. Loading the Data: We load the dataset into our notebook using a pandas DataFrame.
3. Data Pre-Processing: Our next step is to convert our dataset into the best possible format so that we can extract the features required to predict the salary. This is where all the cleaning of our data takes place, be it treating missing values, treating repetitive values, or adding different features according to our needs. Once missing values are identified, there are several ways to deal with them:
Eliminating the samples or features with missing values (we risk deleting relevant information or too many samples). Imputing the missing values with some pre-built estimators, such as the SimpleImputer class from scikit-learn, sketched below.
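A minimal imputation sketch, assuming numeric columns with missing entries (the data values are invented):

import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Replace each missing value with the mean of its column.
imputer = SimpleImputer(strategy="mean")
print(imputer.fit_transform(X))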
4. Data Exploration: Further, we explore our data as much as possible to get to know the features very well. We get to know the count of each feature, its mean value, standard deviation, min and max values, etc., as sketched below.
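A small sketch of this exploration step; the file name follows the notebook code in this chapter:

import pandas as pd

dataset = pd.read_csv('/content/Salary_Data_SLR.csv')
print(dataset.describe())  # count, mean, std, min, max and quartiles per feature
dataset.info()             # column types and non-null counts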
Figure: Bar plot of Education Level.
Bivariate Analysis: As the name suggests, bivariate analysis is the analysis of two features taken together. It is one of the simplest forms of statistical analysis, used to find out whether there is a relationship between two sets of values. It usually involves the variables X and Y. Again, we randomly pick any two features, one pair at a time, and analyse them using histograms, bar graphs, plots, etc., as in the sketch below.
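A minimal bivariate sketch with matplotlib; the column names follow the notebook code in this chapter:

import pandas as pd
import matplotlib.pyplot as plt

dataset = pd.read_csv('/content/Salary_Data_SLR.csv')
plt.scatter(dataset["YearsExperience"], dataset["Salary"])  # two features together
plt.xlabel("YearsExperience")
plt.ylabel("Salary")
plt.show()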
Feature Scaling: This is a crucial step in the preprocessing phase, as the majority of machine learning algorithms perform much better when dealing with features that are on the same scale. The most common techniques are normalization (min-max scaling) and standardization (z-score scaling), sketched below.
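A short sketch of both techniques on invented toy data:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [5.0], [10.0]])

# Normalization: rescale each feature to the [0, 1] range.
print(MinMaxScaler().fit_transform(X))

# Standardization: zero mean and unit variance per feature.
print(StandardScaler().fit_transform(X))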
9. Implementing the Model: Here comes the part where the actual machine learning algorithm is implemented. As stated above, we are using the Linear Regression machine learning algorithm to predict the salary in our Salary Prediction model.
10. Segregating Dependent and Independent Variables: Independent variables (also referred to as features) are the input for a process that is being analyzed. Dependent variables are the output of the process.
For example:
y = f(x), where
x = independent variable
y = dependent variable
This means any change in x will cause a change in the value of y. The change can be negative or positive. In our model, we have “SALARY” as our target/dependent variable, and all other features are considered independent variables, as in the sketch below.
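A minimal segregation sketch; the target column name "Salary" follows the text, and the file name follows the notebook code:

import pandas as pd

dataset = pd.read_csv('/content/Salary_Data_SLR.csv')
X = dataset.drop(columns=["Salary"])  # independent variables (features)
y = dataset["Salary"]                 # dependent variable (target)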
11. Splitting the Data Set into Train and Test Datasets: We will split our data into three parts: training, testing and validation sets. We train our model with the training data, evaluate it on the validation data and finally, once it is ready to use, test it one last time on the test data. The ultimate goal is that the model can generalize well on unseen data, in other words, predict accurate results from new data, based on the internal parameters adjusted while it was trained and validated.
In our model, we have divided our dataset in a 70:30 ratio, i.e., the training data consists of 70% of the dataset while the testing data consists of the remaining 30%. To split the data we use the train_test_split function provided by the scikit-learn library, as sketched below.
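A sketch of the three-way split using two calls to train_test_split; the toy data and the validation proportion are illustrative assumptions:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20, dtype=float).reshape(-1, 1)  # toy feature matrix
y = 2.0 * X.ravel() + 1.0                      # toy target

# Hold out 30% for testing (the 70:30 ratio described above).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=42)

# Optionally carve a validation set out of the training portion.
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.20, random_state=42)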
12. Implementing Linear Regression:
i) Learning Phase: In linear regression we are given a number of predictor variables and a continuous response variable, and we try to find a relationship between those variables that allows us to predict a continuous outcome.
For example, given X and Y, we fit a straight line that minimizes the distance between the sample points and the fitted line, using methods such as Ordinary Least Squares and Gradient Descent to estimate the coefficients. We then use the intercept and slope learned, which form the fitted line, to predict the outcome of new data.
The formula for the straight line is y = B0 + B1x + u, where x is the input, B1 is the slope, B0 the y-intercept, u the residual, and y the value of the line at position x.
The values available for training are B0 and B1, which are the values that affect the position of the line, since the only other variables are x (the input) and y (the output); the residual is not considered. These values (B0 and B1) are the “weights” of the predicting function.
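For reference, the standard closed-form Ordinary Least Squares estimates of these two weights, with $\bar{x}$ and $\bar{y}$ denoting the sample means, are:

\hat{B}_1 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2},
\qquad
\hat{B}_0 = \bar{y} - \hat{B}_1 \bar{x}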
These weights and others, called biases, are the parameters that will be arranged together as matrices.
The process is repeated, one iteration (or step) at a time. In each iteration the initial random line moves closer to the ideal, more accurate one, as in the gradient descent sketch below.
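A hedged sketch of this iterative process: plain gradient descent for y = B0 + B1x, minimizing the mean squared error on invented toy data (the learning rate and step count are illustrative):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0            # toy data drawn from a known line

B0, B1 = 0.0, 0.0            # start from an arbitrary line
lr = 0.01                    # learning rate

for step in range(5000):     # each iteration moves the line closer
    error = (B0 + B1 * x) - y
    # Gradients of the mean squared error with respect to B0 and B1.
    B0 -= lr * 2 * error.mean()
    B1 -= lr * 2 * (error * x).mean()

print(B0, B1)                # approaches (1.0, 2.0)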
ii) Overfitting & Underfitting: One of the most important problems when considering the training of models is the tension between optimization and generalization. At the beginning of training, those two issues are correlated: the lower the loss on the training data, the lower the loss on the test data. This happens while the model is still underfitted: there is still learning to be done, as it has not yet modelled all the relevant patterns in the data. After enough iterations, however, generalization stops improving and the model starts to overfit, learning patterns that are specific to the training data but misleading on new data.
There are two ways to avoid this overfitting: getting more data, and regularization.
• Getting more data is usually the best solution; a model trained on more data will naturally generalize better.
• Regularization is done when the latter is not possible. It is the process of modulating the quantity of information that the model can store, or of adding constraints on what information it is allowed to keep. If the model can only memorize a small number of patterns, the optimization will make it focus on the most relevant ones, improving the chance of generalizing well.
• Regularization is done mainly by techniques such as L2 (Ridge) and L1 (Lasso) weight penalties, as in the sketch below.
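A hedged sketch of L2 regularization applied to linear regression via scikit-learn's Ridge estimator; alpha and the toy data are illustrative:

import numpy as np
from sklearn.linear_model import Ridge

X = np.arange(10, dtype=float).reshape(-1, 1)  # toy feature
y = 3.0 * X.ravel() + 2.0                      # toy target

model = Ridge(alpha=1.0)  # larger alpha = stronger constraint on the weights
model.fit(X, y)
print(model.coef_, model.intercept_)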
CHAPTER-8
CONCLUSION
In today’s real world, it has become tough to store such huge amounts of data and extract them for one’s own requirements; moreover, the extracted data should be useful. The system makes optimal use of the Linear Regression algorithm and uses such data in the most efficient way. The linear regression algorithm helps satisfy users by increasing the accuracy of salary estimation and reducing the risk of settling for an unsuitable salary.
FUTURE SCOPE
The system will provide the user the ability to explore more graduates and reach an accurate decision. More factors that affect the job salary of a graduate, such as the training period, shall be added. In-depth details of every individual will be added to provide ample details of a desired job profile. This will help the system run on a larger scale.