Data Science Logbook
(Autonomous)
Accredited by NAAC with Grade ‘A’ and Recognized u/s 2(f) & 12(B) of the UGC Act
An ISO 9001:2015, ISO 14001:2015 & 45001:2018 Certified Institution
PROGRAM BOOK FOR
SHORT-TERM INTERNSHIP
JNTUGV UNIVERSITY
ACADEMIC YEAR 2024-25
An Internship Report on
Bachelor of Technology
K. Mohan Rao
Submitted by:
Student’s Declaration
Official Certification
This is to certify that Pilla Sai Sowjanya, Reg. No. 22U41A0539, has completed
his/her internship in the APSCHE/EXCELR Data Science/Machine Learning
Virtual Internship under my supervision, in partial fulfilment of the
requirements for the Degree of Bachelor of Technology in the Department of
Computer Science & Engineering at Dadi Institute of Engineering & Technology.
Endorsements
Faculty Guide
Principal
Certificate from Intern Organization
This is to certify that Pilla Sai Sowjanya (Name of the intern), Reg. No.
22U41A0539, of Dadi Institute of Engineering & Technology (Name of the
College) underwent internship in the EXCELR Data Science/Machine Learning
Virtual Internship (Name of the Intern Organization) from 20th May 2024 to
28th June 2024.
_ (Satisfactory/Not Satisfactory).
INTERNSHIP WORK SUMMARY
Modules Covered
1. Python Programming
2. Python Libraries for Data Science
3. SQL for Data Science
4. Mathematics for Data Science
5. Machine Learning
6. Introduction to Deep Learning - Neural Networks
For the project, we applied ensemble learning techniques to predict the sales of
products at Big Mart outlets. The project involved data cleaning, feature
engineering, and model building using algorithms such as Random Forest,
Gradient Boosting, and XGBoost. The final model aimed to improve the
accuracy of sales predictions, providing valuable insights for inventory
management and sales strategies.
Authorized signatory
Company name
Self-Assessment
For the project, we applied ensemble learning techniques to predict sales for Big
Mart outlets. We utilized Python programming and various data science libraries
to clean, manipulate, and analyze the data. The project involved feature
engineering, model training, and evaluation using ensemble methods such as
Random Forest, Gradient Boosting, and XGBoost.
Throughout this internship, we gained hands-on experience with key data science
tools and techniques, enhancing our skills in data analysis, statistical modelling,
and machine learning. The practical application of theoretical knowledge in a
real-world project was immensely valuable.
We are very satisfied with the work we have done, as it has provided us with
extensive knowledge and practical experience. This internship was highly
beneficial, allowing us to enrich our skills in data science and preparing us for
future professional endeavours. We are confident that the knowledge and skills
acquired during this internship will be of great use in our personal and
professional growth.
Acknowledgement
I also greatly thank all the trainers, without whose training and feedback this
project would not have come together. In addition, I am grateful to all those who
helped, directly or indirectly, in completing this project work successfully.
THEORETICAL BACKGROUND OF THE STUDY
Data Science involves the study of data through statistical and computational
techniques to uncover patterns, make predictions, and gain valuable insights. It
encompasses data cleansing, data preparation, analysis, and visualization, aiming
to solve complex problems and inform business strategies.
TECHNOLOGY: Data Science applications in technology include natural
language processing (NLP) for understanding and generating human
language, image recognition and computer vision for analyzing and
interpreting visual data such as images and videos, autonomous vehicles
for making decisions based on real-time data from sensors, and
personalized user experiences in applications and websites based on user
behaviour and preferences.
MODULE 1 : PYTHON PROGRAMMING
1. INTRODUCTION TO PYTHON
Python is a high-level, interpreted, general-purpose language known for its
readable syntax and rich ecosystem of libraries.
Domain Usage: Python is widely used in web development, data analysis,
machine learning, scientific computing, automation, and scripting.
Detailed Explanation:
Python's syntax: Python uses indentation to define code blocks; statements
need no semicolons or braces, which keeps code readable.
Variables in Python: Variables are created by simple assignment and are
dynamically typed, so no explicit type declaration is needed.
Example:
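A minimal illustrative snippet (variable names and values are assumed):

# Variables are created by assignment; the type is inferred at runtime
name = "Alice"        # str
age = 25              # int
height = 5.6          # float
is_student = True     # bool
print(type(name), type(age))   # <class 'str'> <class 'int'>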
3. CONTROL FLOW STATEMENTS
Control flow statements in Python determine the order in which statements are
executed based on conditions or loops. Python provides several control flow
constructs:
Detailed Explanation:
Conditional statements (if, elif, else) execute a block only when the associated
condition evaluates to true.
Example:
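A small conditional sketch with assumed values; the output appears in the comment:

x = 7
if x > 10:
    print("large")
elif x > 5:
    print("medium")
else:
    print("small")
# Output: medium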
for loop: Iterates over a sequence (e.g., list, tuple) or an iterable object.
while loop: Executes a block of code as long as a condition is true.
Example:
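An illustrative loop sketch (list contents assumed):

fruits = ["apple", "banana", "cherry"]
for fruit in fruits:          # iterates once per element
    print(fruit)

count = 0
while count < 3:              # repeats while the condition is true
    print("count =", count)
    count += 1
# Output: apple, banana, cherry, then count = 0, 1, 2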
Example Explanation:
The for loop runs once per element of the sequence, while the while loop
repeats until its condition becomes false.
4. FUNCTIONS
Functions in Python are blocks of reusable code that perform a specific task. They
help in organizing code into manageable parts, promoting code reusability and
modularity.
Detailed Explanation:
1. Function Definition:
o Functions are defined with the def keyword, followed by the function
name and parentheses containing any parameters.
o The body of the function is indented.
Example:
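A simple assumed definition for illustration:

def greet(name):
    """Return a greeting for the given name."""
    return "Hello, " + name + "!"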
2. Function Call:
Example:
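Calling the function defined above:

message = greet("Alice")
print(message)   # Hello, Alice!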
Functions can accept parameters (inputs) that are specified when the
function is called.
Parameters can have default values, making them optional.
Example:
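A sketch of a default parameter (function name and values assumed):

def power(base, exponent=2):   # exponent is optional
    return base ** exponent

print(power(3))      # 9  (default exponent used)
print(power(3, 3))   # 27 (default overridden)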
Example Explanation:
The second parameter has a default value, so the function can be called with
either one argument or two.
5. DATA STRUCTURES
Python provides several built-in data structures that allow you to store and
organize data efficiently. These include lists, tuples, sets, and dictionaries.
Detailed Explanation:
1. Lists: Ordered, mutable sequences written with square brackets.
2. Tuples: Ordered, immutable sequences written with parentheses.
3. Sets: Unordered collections of unique elements written with curly braces.
4. Dictionaries: Mappings of keys to values, written as key: value pairs inside
curly braces.
Example:
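One combined illustrative snippet covering all four structures (values assumed):

my_list = [1, 2, 3]                      # ordered and mutable
my_list.append(4)
my_tuple = (1, 2, 3)                     # ordered and immutable
my_set = {1, 2, 2, 3}                    # duplicates removed -> {1, 2, 3}
my_dict = {"name": "Alice", "age": 25}   # key-value pairs
print(my_dict["name"])                   # Alice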
Example Explanation:
Lists: Used for storing ordered collections of items that can be changed or
updated.
Tuples: Similar to lists but immutable, used when data should not change.
Sets: Used for storing unique items where order is not important.
Dictionaries: Used for storing key-value pairs, allowing efficient lookup
and modification based on keys.
6. FILE HANDLING
File handling in Python allows you to perform various operations on files, such
as reading from and writing to files. This is essential for tasks involving data
storage and manipulation.
Detailed Explanation:
1. Opening and Closing Files:
Files are opened using the open() function, which returns a file object.
Use the close() method to close the file once operations are done.
Example:
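A short sketch (the file name data.txt is assumed):

f = open("data.txt", "w")    # open for writing; creates the file
f.write("hello")
f.close()                    # release the file handle
with open("data.txt") as f:  # the with statement closes the file automatically
    content = f.read()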
2. Reading from Files:
Example:
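A reading sketch, assuming the data.txt file from above exists:

with open("data.txt") as f:
    text = f.read()          # whole file as one string
with open("data.txt") as f:
    lines = f.readlines()    # list of lines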
3. Writing to Files:
Example:
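A writing sketch (output file name assumed):

with open("output.txt", "w") as f:
    f.write("first line\n")
    f.writelines(["second line\n", "third line\n"])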
Example Explanation:
Opening and Closing Files: Files are opened using open() and closed
using close() to release resources.
Reading from Files: Methods like read(), readline(), and readlines() allow
reading content from files, handling file operations efficiently.
Writing to Files: Use write() or writelines() to write data into files,
managing file contents as needed.
7. ERRORS AND EXCEPTION HANDLING
Detailed Explanation:
1. Types of Errors:
o Syntax Errors: Occur when the code violates the syntax rules of
Python. These are detected during compilation.
o Exceptions: Occur during the execution of a program and can be
handled using exception handling.
2. Exception Handling:
Example:
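An illustrative try/except sketch:

try:
    result = 10 / 0
except ZeroDivisionError as e:
    print("Cannot divide by zero:", e)
finally:
    print("This always runs")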
3. Raising Exceptions:
Example:
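A sketch of raising an exception (function name assumed):

def set_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return age

set_age(-5)   # raises ValueError: age cannot be negative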
Example Explanation:
The try block runs code that might fail, except handles the resulting
exception, and raise signals an error condition explicitly.
8. OBJECT-ORIENTED PROGRAMMING (OOP)
Object-oriented programming organizes code around classes and objects.
Detailed Explanation:
Class: Blueprint for creating objects. Defines attributes (data) and methods
(functions) that belong to the class.
Object: Instance of a class. Represents a specific entity based on the class
blueprint.
Example:
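An illustrative class (names assumed):

class Dog:
    def __init__(self, name):
        self.name = name      # attribute

    def bark(self):           # method
        return self.name + " says woof!"

my_dog = Dog("Rex")           # object: an instance of Dog
print(my_dog.bark())          # Rex says woof!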
2. Encapsulation:
Bundling of data (attributes) and methods that operate on the data into a
single unit (class).
Access to data is restricted to methods of the class, promoting data security
and integrity.
3. Inheritance:
A new class (child) inherits attributes and methods from an existing class
(parent), enabling code reuse.
4. Polymorphism:
The same method name can behave differently depending on the class of the
object it is called on.
Example:
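A combined inheritance and polymorphism sketch (class names assumed):

class Animal:
    def speak(self):
        return "Some sound"

class Cat(Animal):            # Cat inherits from Animal
    def speak(self):          # overriding: polymorphism in action
        return "Meow"

for a in (Animal(), Cat()):
    print(a.speak())          # Some sound, then Meow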
Example Explanation:
Classes and Objects: Classes define the structure and behavior of objects,
while objects are instances of classes with specific attributes and methods.
Encapsulation: Keeps the internal state of an object private, controlling
access through methods.
Inheritance: Allows a new class to inherit attributes and methods from an
existing class, facilitating code reuse and extension.
Polymorphism: Enables flexibility by using the same interface (method
name) for different data types or classes, allowing for method overriding
and overloading.
MODULE 2 : PYTHON LIBRARIES FOR DATA SCIENCE
1. NUMPY
NumPy is the fundamental Python package for fast numerical computing on
arrays.
Detailed Explanation:
Arrays in NumPy:
o NumPy's main object is the homogeneous multidimensional array
(ndarray), which is a table of elements (usually numbers), all of the
same type, indexed by a tuple of non-negative integers.
o Arrays are created using np.array() and can be manipulated for
various mathematical operations.
Example:
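A small NumPy sketch (array values assumed):

import numpy as np

a = np.array([1, 2, 3])                  # 1-D array
b = np.array([[1, 2, 3], [4, 5, 6]])     # 2-D array
print(a.shape, b.shape)                  # (3,) (2, 3)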
NumPy Operations:
o NumPy provides a wide range of mathematical functions such as
np.sum(), np.mean(), np.max(), np.min(), etc., which operate
element-wise on arrays or perform aggregations across axes.
Example:
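An aggregation sketch with assumed values:

import numpy as np

arr = np.array([[1, 2], [3, 4]])
print(np.sum(arr))                # 10 (all elements)
print(np.mean(arr, axis=0))       # [2. 3.] (column means)
print(np.max(arr), np.min(arr))   # 4 1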
Broadcasting:
o Broadcasting is a powerful mechanism that allows NumPy to work
with arrays of different shapes when performing arithmetic
operations.
Example:
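A broadcasting sketch with assumed shapes:

import numpy as np

matrix = np.array([[1, 2, 3], [4, 5, 6]])   # shape (2, 3)
row = np.array([10, 20, 30])                # shape (3,)
print(matrix + row)                         # row is broadcast across both rows
# [[11 22 33]
#  [14 25 36]]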
Example Explanation:
NumPy arrays support fast element-wise arithmetic, aggregation functions,
and broadcasting between arrays of compatible shapes.
2. PANDAS
Detailed Explanation:
Pandas provides two core data structures: the DataFrame for two-dimensional
tabular data and the Series for one-dimensional labelled data.
Example:
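An illustrative construction (column names and values assumed):

import pandas as pd

df = pd.DataFrame({"name": ["Alice", "Bob", "Carol"],
                   "age": [25, 30, 35]})
s = pd.Series([10, 20, 30], name="scores")   # one-dimensional labelled data
print(df.head())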
Basic Operations:
o Indexing and Selection: Use loc[] and iloc[] for label-based and
integer-based indexing respectively.
o Filtering: Use boolean indexing to filter rows based on conditions.
o Operations: Apply operations and functions across rows or
columns.
Example:
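Continuing with the df assumed above:

print(df.loc[0, "name"])       # label-based indexing: Alice
print(df.iloc[0, 0])           # integer-based indexing: Alice
adults = df[df["age"] > 28]    # boolean filtering
print(adults)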
Data Manipulation:
o Adding and Removing Columns: Use assignment
(df['New_Column'] = ...) or drop() method.
o Handling Missing Data: Use dropna() to drop NaN values or
fillna() to fill NaN values with specified values.
Example:
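A manipulation sketch, again using the assumed df:

import numpy as np

df["bonus"] = df["age"] * 10                  # add a column
df = df.drop(columns=["bonus"])               # remove it again
df.loc[1, "age"] = np.nan                     # introduce a missing value
print(df.dropna())                            # drop rows containing NaN
print(df.fillna({"age": df["age"].mean()}))   # or fill them in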
Example Explanation:
DataFrame and Series: Pandas DataFrame is used for tabular data, while
Series is used for one-dimensional labelled data.
o Basic Operations: Perform indexing, selection, filtering, and
operations on Pandas objects to manipulate and analyze data.
3. DATA VISUALIZATION: MATPLOTLIB AND SEABORN
Detailed Explanation:
1. Matplotlib:
o Basic Plotting: Create line plots, scatter plots, bar plots, histograms,
etc., using plt.plot(), plt.scatter(), plt.bar(), plt.hist(), etc.
o Customization: Customize plots with labels, titles, legends, colors,
markers, and other aesthetic elements.
o Subplots: Create multiple plots within the same figure using
plt.subplots().
Example:
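An illustrative Matplotlib sketch (data values assumed):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]
fig, (ax1, ax2) = plt.subplots(1, 2)            # two plots in one figure
ax1.plot(x, y, marker="o", label="x squared")   # line plot with customization
ax1.set_title("Line plot")
ax1.legend()
ax2.bar(x, y)                                   # bar plot
ax2.set_title("Bar plot")
plt.show()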
2. Seaborn:
o Built on top of Matplotlib, Seaborn provides high-level statistical plots
with attractive default styles.
Example:
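A Seaborn sketch using its built-in tips sample dataset (chosen here purely for illustration):

import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset("tips")
sns.scatterplot(data=tips, x="total_bill", y="tip", hue="time")
plt.show()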
Example Explanation:
Matplotlib offers fine-grained control over every element of a figure, while
Seaborn provides higher-level statistical plotting with better defaults.
MODULE 3 : SQL FOR DATA SCIENCE
1. INTRODUCTION TO SQL
Detailed Explanation:
SQL (Structured Query Language) is used to define, manipulate, and query
data in relational databases. Its core statements include CREATE TABLE for
defining tables, INSERT for adding rows, UPDATE and DELETE for
modifying data, and SELECT for querying.
3. Querying Data:
Example:
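A self-contained sketch using Python's sqlite3 module with an assumed employees table:

import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER, name TEXT, salary REAL)")
cur.execute("INSERT INTO employees VALUES (1, 'Alice', 50000), (2, 'Bob', 60000)")
for row in cur.execute("SELECT name, salary FROM employees WHERE salary > 55000"):
    print(row)   # ('Bob', 60000.0)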
SQL joins are used to combine rows from two or more tables based on a related
column between them. There are different types of joins:
INNER JOIN:
o Returns rows when there is a match in both tables based on the join
condition.
Example:
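A join sketch on assumed customers and orders tables, again via sqlite3:

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (customer_id INTEGER, name TEXT)")
cur.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER)")
cur.execute("INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob')")
cur.execute("INSERT INTO orders VALUES (100, 1), (101, 3)")
rows = cur.execute("""
    SELECT o.order_id, c.name
    FROM orders o
    INNER JOIN customers c ON o.customer_id = c.customer_id
""").fetchall()
print(rows)   # [(100, 'Alice')] -- only the matching row
# Swapping INNER JOIN for LEFT JOIN would also return (101, None)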
Example Explanation:
INNER JOIN: Returns rows where there is a match in both tables based
on the join condition (customer_id).
LEFT JOIN: Returns all rows from the left table (orders) and the matched
rows from the right table (customers). Returns NULL if there is no match.
RIGHT JOIN: Returns all rows from the right table (customers) and the
matched rows from the left table (orders). Returns NULL if there is no
match.
FULL OUTER JOIN: Returns all rows from both tables, matching rows where
possible and filling in NULL where there is no match.
MODULE 4 : MATHEMATICS FOR DATA SCIENCE
1. MATHEMATICAL FOUNDATIONS
Mathematics forms the backbone of data science, providing essential tools and
concepts for understanding and analyzing data.
Detailed Explanation:
1. Linear Algebra:
Example:
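A NumPy sketch of basic linear-algebra operations (matrix values assumed):

import numpy as np

A = np.array([[2, 1], [1, 3]])
v = np.array([1, 2])
print(A @ v)               # matrix-vector product: [4 7]
print(np.linalg.det(A))    # determinant: 5.0
print(np.linalg.inv(A))    # matrix inverse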
2. Calculus:
Example:
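A numerical-differentiation sketch (function and point assumed):

# Derivative of f(x) = x**2 at x = 3; the true value is 6
f = lambda x: x ** 2
h = 1e-6
derivative = (f(3 + h) - f(3 - h)) / (2 * h)   # central difference
print(round(derivative, 4))                    # 6.0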
Example Explanation:
Vectors, matrices, and derivatives underpin most machine learning methods,
from linear models to gradient-based optimization.
2. PROBABILITY AND STATISTICS
Probability and statistics are fundamental in data science for analyzing and
interpreting data, making predictions, and drawing conclusions.
Detailed Explanation:
1. Probability Basics:
Example:
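A simulation sketch for a basic probability (the die example is assumed):

import random

trials = 100_000
even = sum(1 for _ in range(trials) if random.randint(1, 6) % 2 == 0)
print(even / trials)   # close to the theoretical P(even) = 3/6 = 0.5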
2. Descriptive Statistics:
Descriptive statistics are used to summarize and describe the basic features of
data. They provide insights into the central tendency, dispersion, and shape of a
dataset.
Detailed Explanation:
1. Measures of Central Tendency:
Mean: The arithmetic average of the values.
Median: The middle value of the sorted data.
Mode: The most frequently occurring value.
Example:
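A sketch using the standard-library statistics module (data values assumed):

import statistics

data = [2, 3, 3, 5, 7, 10]
print(statistics.mean(data))     # 5.0
print(statistics.median(data))   # 4.0
print(statistics.mode(data))     # 3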
2. Measures of Dispersion:
Variance: Measures how far each number in the dataset is from the
mean.
Standard Deviation: Square root of the variance; it indicates the
amount of variation or dispersion of a set of values.
Range: The difference between the maximum and minimum values
in the dataset.
Example:
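A dispersion sketch on assumed data:

import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.pvariance(data))   # 4.0 (population variance)
print(statistics.pstdev(data))      # 2.0 (population standard deviation)
print(max(data) - min(data))        # 7 (range)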
Example Explanation:
Variance and standard deviation quantify how spread out the data is around
the mean, while the range captures only the extremes.
3. PROBABILITY DISTRIBUTIONS
Detailed Explanation:
1. Normal Distribution:
A continuous, symmetric, bell-shaped distribution fully described by its mean
and standard deviation (a combined sampling sketch follows item 3).
2. Binomial Distribution:
The distribution of the number of successes in a fixed number of independent
yes/no trials, each with the same success probability.
3. Poisson Distribution:
The distribution of the number of events occurring in a fixed interval, given a
constant average rate.
Example:
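A combined sampling sketch for all three distributions (parameters assumed):

import numpy as np

rng = np.random.default_rng(seed=0)
normal = rng.normal(loc=0, scale=1, size=10_000)    # mean 0, std dev 1
binom = rng.binomial(n=10, p=0.5, size=10_000)      # 10 trials, p = 0.5
poisson = rng.poisson(lam=3, size=10_000)           # average rate 3
print(normal.mean(), binom.mean(), poisson.mean())  # ~0.0 ~5.0 ~3.0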
Example Explanation:
Each distribution models a different kind of randomness, and sampling from
them makes their typical values easy to inspect.
MODULE 5 : MACHINE LEARNING
Detailed Explanation:
o Machine Learning: Relies on algorithms and statistical models to
perform tasks; requires feature engineering and domain expertise.
o Deep Learning: Subset of ML using artificial neural networks with
multiple layers to learn representations of data; excels in handling
large volumes of data and complex tasks like image and speech
recognition.
Supervised learning involves training a model on labeled data, where each data
point is paired with a corresponding target variable (label). The goal is to learn a
mapping from input variables (features) to the output variable (target) based on
the input-output pairs provided during training.
Classification
Algorithms:
1. Logistic Regression
A linear model for classification that maps a weighted sum of the features
through the sigmoid function to produce class probabilities.
2. Decision Trees
Tree-structured models that repeatedly split the data on feature values to reach
a predicted class (see the combined sketch after item 3).
3. Random Forest
An ensemble of decision trees trained on random subsets of the data and
features; their combined votes yield a more robust prediction.
Example:
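A combined sketch of all three classifiers on scikit-learn's built-in iris dataset (chosen for illustration, not the internship data):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(),
              RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)                               # train on the split
    print(type(model).__name__, model.score(X_test, y_test))  # test accuracy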
Support Vector Machines (SVM) are robust supervised learning models used for
classification and regression tasks. They excel in scenarios where the data is not
linearly separable by transforming the input space into a higher dimension.
Detailed Explanation:
1. Hyperplane and Support Vectors:
SVM finds the hyperplane that best separates the classes, positioned by the
nearest data points (the support vectors).
2. Types of SVM
o C-Support Vector Classification (SVC): SVM for classification
tasks, maximizing the margin between classes.
o Nu-Support Vector Classification (NuSVC): Similar to SVC but
allows control over the number of support vectors and training
errors.
o Support Vector Regression (SVR): SVM for regression tasks,
fitting a hyperplane within a margin of tolerance.
3. Advantages of SVM
Effective in high-dimensional spaces, memory-efficient (only the support
vectors matter), and flexible through different kernel functions.
4. Applications of SVM
Common applications include text classification, image recognition, and
bioinformatics.
Hyperplane and Support Vectors: SVMs find the optimal hyperplane that
maximizes the margin between classes, with support vectors influencing its
position.
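A brief SVC sketch on the iris data (illustrative parameters):

from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="rbf", C=1.0)      # RBF kernel handles non-linear boundaries
clf.fit(X, y)
print(clf.support_vectors_.shape)   # the support vectors defining the margin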
5. Decision Trees
Decision Trees are versatile supervised learning models used for both
classification and regression tasks. They create a tree-like structure where each
internal node represents a "decision" based on a feature, leading to leaf nodes that
represent the predicted outcome.
Detailed Explanation:
3. Advantages of Decision Trees
Easy to interpret and visualize, require little data preparation, and handle both
numerical and categorical features.
Regression Analysis
1. Linear Regression
Detailed Explanation:
Linear regression models the target as a weighted sum of the input features; in
simple linear regression, y = b0 + b1*x, where b0 is the intercept and b1 the
slope.
Example (Simple Linear Regression):
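A minimal sketch with assumed data following y = 2x + 1:

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4]])        # single feature
y = np.array([3, 5, 7, 9])
model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)   # ~2.0 and ~1.0
print(model.predict([[5]]))               # [11.]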
2. Naive Bayes
Detailed Explanation:
Naive Bayes applies Bayes' theorem with the simplifying assumption that
features are conditionally independent given the class, making training and
prediction very fast.
Scalability: Handles high-dimensional data well, such as text
classification.
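A short Gaussian Naive Bayes sketch on the iris data (illustrative):

from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
nb = GaussianNB().fit(X, y)
print(nb.predict(X[:3]))   # predicted classes for the first three samples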
Support Vector Machines (SVM) are versatile supervised learning models that
can be used for both classification and regression tasks. In regression, SVM aims
to find a hyperplane that best fits the data, while maximizing the margin from the
closest points (support vectors).
Detailed Explanation:
3. Advantages of SVM for Regression
Robust to small errors (points inside the epsilon margin incur no loss) and
effective in high-dimensional feature spaces.
Example Explanation:
Kernel Trick: SVM uses kernel functions to transform the input space
into a higher-dimensional space where data points can be linearly
separated.
Loss Function: SVM minimizes the error between predicted and actual
values while maximizing the margin around the hyperplane.
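An SVR sketch on assumed synthetic data:

import numpy as np
from sklearn.svm import SVR

X = np.linspace(0, 5, 50).reshape(-1, 1)
y = np.sin(X).ravel()
svr = SVR(kernel="rbf", epsilon=0.1)   # points inside the epsilon margin incur no loss
svr.fit(X, y)
print(svr.predict([[2.0]]))            # close to sin(2.0) ≈ 0.909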
Gradient Boosting Regression
Detailed Explanation:
Gradient boosting builds an ensemble of shallow trees sequentially; each new
tree is fitted to the residual errors of the current ensemble, gradually reducing
the overall loss.
3. Advantages of Gradient Boosting for Regression
High predictive accuracy, the ability to capture non-linear relationships, and
fine control through the learning rate, tree depth, and number of estimators.
Example Explanation:
Applications: Gradient Boosting is widely used in domains requiring high
predictive accuracy and handling complex data relationships.
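A gradient-boosting sketch on synthetic regression data (parameters assumed):

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
gbr = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1, max_depth=3)
gbr.fit(X_train, y_train)
print(gbr.score(X_test, y_test))   # R^2 on held-out data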
Unsupervised learning algorithms are used when we only have input data (X) and
no corresponding output variables. The algorithms learn to find the inherent
structure in the data, such as grouping or clustering similar data points together.
Detailed Explanation:
4. Applications of Unsupervised Learning
o Customer Segmentation: Grouping customers based on their
purchasing behaviors.
o Anomaly Detection: Identifying unusual patterns in data that do not
conform to expected behavior.
o Recommendation Systems: Suggesting items based on user
preferences and similarities.
Principal Component Analysis (PCA)
Detailed Explanation:
PCA projects the data onto a small set of orthogonal directions (principal
components) that capture the greatest variance, reducing dimensionality while
preserving structure.
Example:
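A PCA sketch on the iris data (illustrative):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)               # keep the two strongest directions
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                  # (150, 2)
print(pca.explained_variance_ratio_)    # variance captured per component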
3. Advantages of PCA
Reduces dimensionality while retaining most of the variance, removes
correlated features, and speeds up downstream models.
4. Applications of PCA
Common uses include visualization of high-dimensional data, noise reduction,
and feature compression before model training.
Clustering techniques
K-Means Clustering
Detailed Explanation:
K-Means partitions the data into k clusters by repeatedly assigning each point
to the nearest centroid and recomputing each centroid as the mean of its
assigned points.
3. Advantages of K-Means Clustering
Simple, fast, and scalable to large datasets when the number of clusters is
known in advance.
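A K-Means sketch on synthetic blob data (cluster count assumed):

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # one centroid per cluster
print(km.labels_[:10])       # cluster assignments of the first ten points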
Hierarchical Clustering
Detailed Explanation:
o Distance Matrix: Compute a distance matrix that measures the
distance between each pair of data points.
o Merge or Split: Iteratively merge or split clusters based on their
distances until the desired number of clusters is achieved or a
termination criterion is met.
o Dendrogram: Visual representation of the clustering process,
showing the order and distances of merges or splits.
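A hierarchical-clustering sketch with SciPy (synthetic data assumed):

import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=30, centers=3, random_state=0)
Z = linkage(X, method="ward")   # agglomerative merges, closest clusters first
dendrogram(Z)                   # visual summary of merge order and distances
plt.show()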
MODULE 6 : INTRODUCTION TO DEEP LEARNING
Detailed Explanation:
Neuron:
Basic computational unit that receives inputs, applies weights and a bias, and
passes the result through an activation function.
Activation Function:
Non-linear function (e.g., ReLU, sigmoid, tanh) applied to a neuron's output,
enabling the network to learn complex patterns.
Layer:
Collection of neurons at the same depth; networks are organized into an input
layer, one or more hidden layers, and an output layer.
Backpropagation:
Algorithm that propagates the prediction error backwards through the network
to compute the gradient for every weight.
Loss Function:
Function that measures the difference between the network's predictions and
the true targets.
Gradient Descent:
Optimization technique used to minimize the loss function by iteratively
adjusting weights in the direction of the negative gradient.
Batch Size:
Number of training samples processed before the model's weights are
updated.
Epoch:
One complete pass through the entire training dataset during the training
of a neural network.
Learning Rate:
Parameter that controls the size of steps taken during gradient descent. It
affects how quickly the model learns and converges to optimal weights.
Overfitting:
Condition where a model learns to memorize the training data rather than
generalize to new, unseen data. Regularization techniques help mitigate
overfitting.
Underfitting:
Condition where a model is too simple to capture the underlying pattern in the
data, performing poorly even on the training set.
Dropout:
Regularization technique that randomly deactivates a fraction of neurons
during training to reduce overfitting.
Recurrent Neural Network (RNN):
Neural network architecture designed for sequential data processing,
where connections between neurons can form cycles. RNNs are suitable
for tasks like time series prediction and natural language processing.
1. Feedforward Process:
o Input Propagation: Input data is fed into the input layer of the
neural network.
o Forward Pass: Data flows through the network layer by layer.
Each neuron in a layer receives inputs from the previous layer,
computes a weighted sum, applies an activation function, and
passes the result to the next layer.
o Output Generation: The final layer (output layer) produces
predictions or classifications based on the learned representations
from the hidden layers.
2. Training Process:
o Loss Calculation: Compares the network's output with the true
labels to compute a loss (error) value using a loss function (e.g.,
Mean Squared Error for regression, Cross-Entropy Loss for
classification).
o Backpropagation: Algorithm used to minimize the loss by
adjusting weights backward through the network. It computes
gradients of the loss function with respect to each weight using the
chain rule of calculus.
o Gradient Descent: Optimization technique that updates weights in
the direction of the negative gradient to reduce the loss, making the
network more accurate over time.
o Epochs and Batch Training: Training involves multiple passes
(epochs) through the entire dataset, with updates applied in batches
to improve training efficiency and generalization.
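The feedforward and training steps above can be condensed into a minimal NumPy sketch of one gradient-descent update for a single-neuron network (all values are assumed; squared loss is used for simplicity):

import numpy as np

x = np.array([0.5, 1.0])           # input features
t = 1.0                            # true label
W = np.array([0.2, -0.1])          # initial weights (assumed)
b, lr = 0.0, 0.1                   # bias and learning rate

sigmoid = lambda z: 1 / (1 + np.exp(-z))
y = sigmoid(W @ x + b)             # feedforward pass
loss = 0.5 * (y - t) ** 2          # loss calculation
grad = (y - t) * y * (1 - y)       # backpropagation through the sigmoid
W -= lr * grad * x                 # gradient-descent weight update
b -= lr * grad
print(loss, W, b)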
3. Model Evaluation and Deployment:
o Validation: After training, the model's performance is evaluated on
a separate validation dataset to assess its generalization ability.
o Deployment: Once validated, the trained model can be deployed to
make predictions or classifications on new, unseen data in real-
world applications.
5. Generative Adversarial Networks (GAN)
Architecture in which two networks compete: a generator produces synthetic
samples while a discriminator learns to distinguish them from real data,
driving both to improve.
PROJECT WORK
PROJECT OVERVIEW
Data Description: The dataset for this project includes annual sales records for
2013, encompassing 1559 products across ten different stores located in various
cities. The dataset is rich in attributes, offering valuable insights into customer
preferences and product performance.
Key Objectives
Learning Objectives:
Methodology
3. Feature Engineering:
o Create and transform features that capture the drivers of product sales.
4. Model Development:
o Train candidate regression models on the engineered features.
5. Ensemble Techniques:
o Explore model stacking and blending to improve prediction accuracy
(an illustrative sketch follows this list).
6. Model Evaluation and Selection:
o Assess model performance using appropriate metrics.
o Select the most effective model or ensemble for deployment.
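As an illustrative sketch of the stacking idea, scikit-learn's StackingRegressor on synthetic data (the project's actual XGBoost-based pipeline and dataset are not reproduced here):

from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=10, noise=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100)),
                ("gb", GradientBoostingRegressor())],
    final_estimator=Ridge(),           # meta-model combines base predictions
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))     # R^2 of the stacked ensemble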
Expected Outcomes
o Multivariate Adaptive Regression Splines (MARS): MARS
excelled in handling interactions between features and provided
robust performance by fitting piecewise linear regressions.
o Ensemble Techniques (Model Stacking and Model Blending):
By combining predictions from multiple models, ensemble
techniques delivered the best performance. Model stacking, in
particular, improved accuracy by leveraging the strengths of
individual models.
2. Key Findings:
o Feature Importance: Through various models, features such as
item weight, item fat content, and store location were consistently
identified as significant predictors of sales.
o Customer Preferences: Analysis revealed that products with
lower fat content had higher sales in urban stores, indicating a
health-conscious consumer base in these areas.
o Store Performance: Certain stores consistently outperformed
others, suggesting potential areas for targeted marketing and
inventory strategies.
3. Best-Performing Model:
o The stacked ensemble delivered the best overall performance, leveraging
the complementary strengths of the individual models.
Recommendations:
1. Inventory Management:
o Utilize the insights from the sales forecasts to optimize inventory
levels, ensuring high-demand products are adequately stocked to
meet customer needs while reducing excess inventory for low-
demand items.
2. Targeted Marketing:
o Implement targeted marketing strategies based on customer
preferences identified in the analysis. For example, promote low-
fat products more aggressively in urban stores where they are more
popular.
5. Employee Training:
o Train store managers and staff on the use of sales forecasts and
data-driven decision-making. Empowering employees with these
insights can lead to better in-store execution and customer service.
ACTIVITY LOG FOR FIRST WEEK
WEEKLY REPORT
WEEK - 1 (From Dt 20 May 2024 to Dt 24 May 2024)
Objective of the Activity Done: The first week aimed to introduce the students
to the fundamentals of Data Science, covering program structure, key concepts,
applications, and an overview of various modules such as Python, SQL, Data
Analytics, Statistics, Machine Learning, and Deep Learning.
Detailed Report: During the first week, the training sessions provided a
comprehensive introduction to the Data Science internship program. On the first
day, students were oriented on the program flow, schedule, and objectives. They
learned about the definition and significance of Data Science in today's data-
driven world.
The following day, students explored various applications and real-world use
cases of Data Science across different industries, helping them understand its
practical implications and benefits. Mid-week, the focus was on basic
definitions and differences between key terms like Data Science, Data
Analytics, and Business Intelligence, ensuring a solid foundational
understanding.
Towards the end of the week, students were introduced to the different modules
of the course, including Python, SQL, Data Analytics, Statistics, Machine
Learning, and Deep Learning. These sessions provided an overview of each
module's importance and how they contribute to the broader field of Data
Science.
By the end of the week, students had a clear understanding of the training
program's structure, fundamental concepts of Data Science, and the various
applications and use cases across different industries. They were also familiar
with the key modules to be studied in the coming weeks, laying a strong
foundation for more advanced learning.
ACTIVITY LOG FOR SECOND WEEK
WEEKLY REPORT
WEEK - 2 (From Dt 27 May 2024 to Dt 31 May 2024)
Learning Outcomes:
ACTIVITY LOG FOR THIRD WEEK
WEEKLY REPORT
WEEK - 3 (From Dt 03 June 2024 to Dt 07 June 2024 )
Objective of the Activity Done: The third week aimed to introduce students
to Object-Oriented Programming (OOP) concepts in Python, Python libraries
essential for Data Science (NumPy and Pandas), and foundational SQL
concepts. Students learned practical implementation of OOP principles,
numerical operations using NumPy, data manipulation with Pandas dataframes,
and basic SQL commands for database management.
Detailed Report:
Learning Outcomes:
ACTIVITY LOG FOR FOURTH WEEK
WEEKLY REPORT
WEEK - 4 (From Dt 10 June 2024 to Dt 14 June 2024)
Objective of the Activity Done: The focus of the fourth week was to delve into
SQL, advanced SQL queries, and database operations for data analysis.
Additionally, the week covered fundamental mathematics for Data Science,
including descriptive statistics, inferential statistics, hypothesis testing,
probability measures, and distributions essential for data analysis and decision-
making.
Detailed Report:
Learning Outcomes:
ACTIVITY LOG FOR FIFTH WEEK
WEEKLY REPORT
WEEK - 5 (From Dt 17 June 2024 to Dt 21 June 2024)
Objective of the Activity Done: The fifth week focused on Machine Learning
fundamentals, covering supervised and unsupervised learning techniques, model
evaluation metrics, and hyperparameter tuning. Students gained a
comprehensive understanding of different types of Machine Learning,
algorithms used for both classification and regression, and techniques for
feature importance and dimensionality reduction.
Detailed Report:
Learning Outcomes:
Acquired knowledge of popular algorithms such as decision trees,
random forests, and SVM for both classification and regression tasks.
Learned methods for feature importance assessment and dimensionality
reduction in unsupervised learning.
Gained proficiency in evaluating model performance using metrics and
techniques for hyperparameter tuning to improve model accuracy and
effectiveness.
Learned about ensemble methods (bagging, boosting, stacking) and their
application in combining multiple models for improved predictive
performance.
Gained an introduction to Deep Learning, understanding its applications
and advantages.
Explored basic terminology and types of neural networks, laying the
foundation for deeper study in Deep Learning
ACTIVITY LOG FOR SIXTH WEEK
WEEKLY REPORT
WEEK - 6 (From Dt 24 June 2024 to Dt 28 June 2024)
Objective of the Activity Done: The sixth week focused on practical aspects of
Machine Learning (ML) and introduction to Deep Learning (DL). Topics
included the ML project lifecycle, data preparation, exploratory data analysis
(EDA), model development and evaluation, ensemble methods (bagging,
boosting, stacking), introduction to DL and neural networks.
Detailed Report:
Learning Outcomes:
Learned about ensemble methods (bagging, boosting, stacking) and their
application in combining multiple models for improved predictive
performance.
Gained an introduction to Deep Learning, understanding its applications
and advantages.
Explored basic terminology and types of neural networks, laying the
foundation for deeper study in Deep Learning.
Student Self Evaluation of the Short-Term Internship
Date of Evaluation:
1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5
Evaluation by the Supervisor of the Intern Organization
Student Name: Registration No:
Please note that your evaluation shall be done independently of the Student’s
self-evaluation.
1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5
PHOTOS
INTERNAL ASSESSMENT STATEMENT
University: JNTUGV
Sl.No Evaluation Criterion Maximum Marks
1. Activity Log 10
2. Internship Evaluation 30
3. Oral Presentation 10
GRAND TOTAL 50
EXTERNAL ASSESSMENT STATEMENT
Programme of Study:
Year of Study:
Group:
University:
Sl.No Evaluation Criterion Maximum Marks Marks Awarded
1. Internship Evaluation 80
2. Grading given by the Supervisor of the Intern Organization 20
3. Viva-Voce 50
TOTAL 150
GRAND TOTAL (EXT. 150 M + INT. 50 M) 200