Date:
EX.No: 01 APRIORI ALGORITHM TO EXTRACT ASSOCIATION RULES OF DATA MINING
AIM:
To write a program to implement the Apriori algorithm and extract association rules from a transaction dataset.
ALGORITHM:
Step 1: Start the Program
Step 2: Install and load the arules package
Step 3: Read the transaction dataset into R.
Step 4: Use the apriori() function to apply the Apriori algorithm to the
transaction dataset.
Step 5: Display the Result.
Step 6: Stop the Program
PROGRAM:
install.packages("arules")
library(arules)
# Read the transaction data in basket format (one transaction per line, items separated by commas)
transactions <- read.transactions(file.choose(), format = "basket", sep = ",", header = TRUE)
summary(transactions)
# Mine association rules with minimum support 0.1 and minimum confidence 0.5
rules <- apriori(transactions, parameter = list(support = 0.1, confidence = 0.5))
inspect(rules)
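As an optional extension (a sketch, not part of the recorded program), the mined rules can be ordered by a quality measure such as lift before inspection; arules provides sort() and inspect() for this.
# Sort the mined rules by lift and inspect the strongest ones (sketch)
rules_by_lift <- sort(rules, by = "lift", decreasing = TRUE)
inspect(head(rules_by_lift, n = 5))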
OUTPUT:
RESULT:
Thus the R program has been executed successfully and Output verified.
Date:
EX.No: 02 K-MEANS CLUSTERING TECHNIQUE
AIM:
To write a program to implement the k-means clustering technique.
ALGORITHM:
Step 1: Start the Program
Step 2: Load the Necessary Packages
Step 3: Load the USArrests dataset
Step 4: Remove any rows with missing values
Step 5: Scale each variable in the dataset to have a mean of 0 and a standard
deviation of 1
Step 6: Decide on the number of clusters k (an elbow-method sketch for checking an optimal k is given after the program)
Step 7: Perform k-means clustering with k = 4 clusters
Step 8: Display the Result.
Step 9: Stop the Program
PROGRAM:
library(cluster)
df <- USArrests
df <- na.omit(df)   # remove rows with missing values
df <- scale(df)     # standardize each variable (mean 0, sd 1)
head(df)
set.seed(1)         # make the clustering reproducible
km <- kmeans(df, centers = 4, nstart = 25)
km
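As mentioned in Step 6, the choice of k can be checked with the elbow method. A minimal sketch using base R only (not part of the recorded program): compute the total within-cluster sum of squares for several values of k and look for the bend in the curve.
# Elbow method sketch: total within-cluster sum of squares for k = 1 to 10
wss <- sapply(1:10, function(k) kmeans(df, centers = k, nstart = 25)$tot.withinss)
plot(1:10, wss, type = "b",
     xlab = "Number of clusters k",
     ylab = "Total within-cluster sum of squares")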
OUTPUT:
RESULT:
Thus the R program has been executed successfully and Output verified.
Date:
EX.No: 03 HIERARCHICAL CLUSTERING
AIM:
To write a program to implement a hierarchical clustering technique.
ALGORITHM:
Step 1: Start the Program
Step 2: Load the cluster library.
Step 3: Load the USArrests dataset.
Step 4: Remove any rows with missing values.
Step 5: Scale the data to have zero mean and unit variance.
Step 6: Perform hierarchical clustering using the Ward method (method = "ward") and store the result in clust.
Step 7: Plot the dendrogram using the pltree() function with specific parameters (cex, hang, main) for customization.
Step 8: Display the Result.
Step 9: Stop the Program
PROGRAM:
library(cluster)
df <- USArrests
df <- na.omit(df)   # remove rows with missing values
df <- scale(df)     # standardize each variable (mean 0, sd 1)
# Agglomerative hierarchical clustering with Ward's linkage
clust <- agnes(df, method = "ward")
pltree(clust, cex = 0.6, hang = -1, main = "Dendrogram")
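The dendrogram can also be cut into a fixed number of groups. A minimal sketch (assuming four clusters, as in the k-means exercise) converts the agnes result to an hclust object and applies cutree():
# Cut the tree into 4 clusters and tabulate the group sizes (sketch)
groups <- cutree(as.hclust(clust), k = 4)
table(groups)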
OUTPUT:
RESULT:
Thus the R program has been executed successfully and Output verified.
Date:
EX.No: 04 CLASSIFICATION ALGORITHM
AIM:
To write a program to implement a classification algorithm.
ALGORITHM:
Step 1: Start the Program
Step 2: Load the class package
Step 3: Split the iris dataset into training and testing sets
Step 4: Perform k-nearest neighbors classification with k = 5
Step 5: Compare the predicted labels with the actual labels
Step 6: Display the Result.
Step 7: Stop the Program
PROGRAM:
library(class)
set.seed(123)
# Randomly assign each row to the training (70%) or test (30%) set
ind <- sample(2, nrow(iris), replace = TRUE, prob = c(0.7, 0.3))
train <- iris[ind == 1, ]
test <- iris[ind == 2, ]
# k-nearest neighbours with k = 5; column 5 (Species) holds the class labels
pred <- knn(train[, -5], test[, -5], train[, 5], k = 5)
# Confusion matrix: predicted vs. actual species
table(pred, test[, 5])
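If an overall accuracy figure is wanted in addition to the confusion matrix, it can be computed from the same table. A short sketch:
# Overall accuracy = correctly classified cases / total test cases (sketch)
tab <- table(pred, test[, 5])
accuracy <- sum(diag(tab)) / sum(tab)
accuracy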
OUTPUT:
pred         setosa versicolor virginica
  setosa         15          0         0
  versicolor      0         11         1
  virginica       0          3        14
RESULT:
Thus the R program has been executed successfully and Output verified.
Date:
EX.No: 05 DECISION TREE
AIM:
To write a program to implement a decision tree classifier.
ALGORITHM:
Step 1: Start the Program
Step 2: Load the rpart library.
Step 3: Use the iris dataset, a built-in dataset in R.
Step 4: Split the dataset into training and testing sets.
Step 5: Build a decision tree model using the rpart() function.
Step 6: Plot the decision tree.
Step 7: Calculate the accuracy of the model.
Step 8: Display the Result.
Step 9: Stop the Program
PROGRAM:
library(rpart)
data(iris)
set.seed(123)
# 70/30 train/test split
train_indices <- sample(1:nrow(iris), 0.7 * nrow(iris))
train_data <- iris[train_indices, ]
test_data <- iris[-train_indices, ]
# Fit a classification tree predicting Species from all other variables
tree_model <- rpart(Species ~ ., data = train_data, method = "class")
summary(tree_model)
plot(tree_model)
text(tree_model)
# Predict on the test set and compute accuracy from the confusion matrix
predictions <- predict(tree_model, test_data, type = "class")
conf_matrix <- table(predictions, test_data$Species)
print(conf_matrix)
accuracy <- sum(diag(conf_matrix)) / sum(conf_matrix)
print(paste("Accuracy:", accuracy))
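The base plot() of an rpart tree is fairly plain. As an optional sketch (assuming the rpart.plot package is available; it is not used in the recorded program), a more readable tree can be drawn and the complexity-parameter table checked before pruning:
# Optional sketch: nicer tree plot and complexity table (assumes rpart.plot is installed)
library(rpart.plot)
rpart.plot(tree_model)
printcp(tree_model)  # cross-validated error for each complexity parameter value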
OUTPUT:
RESULT:
Thus the R program has been executed successfully and Output verified.
Date:
EX.No: 06 LINEAR REGRESSION.
AIM:
To write a program to implement linear regression.
ALGORITHM:
Step 1: Start the Program
Step 2: Create sample data vectors x and y
Step 3: Perform linear regression
Step 4: Print summary of the regression model
Step 5: Plot the data and regression line
Step 6: Display the Result.
Step 7: Stop the Program
PROGRAM:
# Sample data
x <- c(1, 2, 3, 4, 5)
y <- c(2, 3, 4, 5, 6)
# Fit a simple linear regression of y on x
linear_model <- lm(y ~ x)
summary(linear_model)
# Scatter plot with the fitted regression line
plot(x, y, main = "Linear Regression", xlab = "x", ylab = "y")
abline(linear_model, col = "red")
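The fitted model can also be used to predict y for new x values. A minimal sketch with hypothetical new values:
# Predict y for new (hypothetical) x values using the fitted model
new_x <- data.frame(x = c(6, 7))
predict(linear_model, newdata = new_x)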
OUTPUT:
RESULT:
Thus the R program has been executed successfully and Output verified.
Date:
EX.No: 07 DATA VISUALIZATION.
AIM:
To write a program for data visualization using ggplot2.
ALGORITHM:
Step 1: Start the Program
Step 2: Import ggplot2 library
Step 3: Load dataset mtcars
Step 4: Create ggplot object with aes(x = wt, y = mpg)
Step 5: Add scatter plot layer with geom_point()
Step 6: Set labels for title, x-axis, and y-axis using labs()
Step 7: Customize the theme with theme_minimal()
Step 8: Display the plot using print()
Step 9: Stop the Program
PROGRAM:
library(ggplot2)
data(mtcars)
# Scatter plot of car weight against fuel efficiency
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  labs(title = "Car Weight vs. Miles per Gallon",
       x = "Weight (1000 lbs)",
       y = "Miles per Gallon") +
  theme_minimal()
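As an optional extension (a sketch, not part of the recorded program), a fitted linear trend line can be overlaid on the scatter plot with geom_smooth():
# Same scatter plot with a linear trend line overlaid (sketch)
ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  labs(title = "Car Weight vs. Miles per Gallon",
       x = "Weight (1000 lbs)",
       y = "Miles per Gallon") +
  theme_minimal()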
OUTPUT:
RESULT:
Thus the R program has been executed successfully and Output verified.