
COM 327 WEEK 3 PRACTICAL ACTIVITY

Understanding Problem Solving Techniques Using Formal and Informal Language in AI

In the context of AI, problem-solving techniques can be expressed using both formal and
informal language, depending on the audience, context, and the level of detail required. Here's
how problem-solving techniques can be explained using both types of language:

(A) Formal Language (Technical):

(a) Problem Statement (Formal Language):

In a formal problem-solving approach, the problem is defined precisely and often
includes specific technical terms, constraints, and parameters.

For example:

(I) "The task is to implement a convolutional neural network (CNN) for image
classification. The input dataset consists of 50,000 labeled images, divided into
ten classes. The goal is to achieve an accuracy of at least 95% on a separate
validation dataset."

(II) "Given a set of n integers, find the two numbers that sum to a target value k."

(b) Algorithm Description (Formal Language):

Technical details and precise algorithms are described using formal language. This
may include mathematical equations and coding syntax, and is often expressed with
pseudocode, flowcharts, or programming languages.

For example:

(I) // Initialize CNN architecture with specific layers and parameters.
    // Preprocess the input images by normalizing pixel values.
    // Split the dataset into training and validation sets.
    // Train the CNN using stochastic gradient descent (SGD) with a
    //   specific learning rate and batch size.
    // Evaluate the model's performance using cross-entropy loss and
    //   accuracy metrics.

(II) // Initialize an empty hash table.
     // For each element x in the input array:
     //     Calculate the complement (k - x).
     //     If the complement exists in the hash table, return (x, complement).
     //     Otherwise, add x to the hash table.
     // Return "No solution found."
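The hash-table pseudocode in (II) maps almost directly onto Python. A minimal
sketch, where the function name and the sample input are chosen purely for
illustration:

def two_sum(numbers, k):
    seen = set()                    # hash table of values seen so far
    for x in numbers:
        complement = k - x
        if complement in seen:      # complement already seen: pair found
            return (x, complement)
        seen.add(x)                 # otherwise remember x and keep scanning
    return "No solution found."

print(two_sum([3, 8, 5, 11], 13))   # prints (5, 8)

The CNN pipeline in (I) can likewise be sketched in Python using the Keras API.
The architecture, learning rate, batch size, and the use of the CIFAR-10 dataset
(which happens to match the 50,000-image, ten-class description) are assumptions
made for illustration, not a prescribed solution:

import tensorflow as tf

# Load the data and preprocess by normalizing pixel values to [0, 1]
(x_train, y_train), (x_val, y_val) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0
x_val = x_val.astype("float32") / 255.0

# Initialize a small CNN architecture (layer sizes are illustrative)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Train with SGD at a specific learning rate and batch size
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=5,
          validation_data=(x_val, y_val))

# Evaluate the model using cross-entropy loss and accuracy
val_loss, val_acc = model.evaluate(x_val, y_val)
print(f"Validation accuracy: {val_acc:.3f}, cross-entropy loss: {val_loss:.3f}")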

(c) Evaluation Metrics (Formal Language):

When discussing the evaluation of AI models, formal language is used to describe
metrics and performance criteria. This often involves mathematical equations and
formulas, especially in scientific or engineering contexts.

For example:

(I) "The model achieved an accuracy of 96.5% on the validation set,


surpassing the specified goal of 95%. Additionally, the cross-entropy loss was
reduced to 0.15."
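Such figures can also be computed programmatically. A minimal sketch using
scikit-learn, where the labels and predicted probabilities are made up purely for
illustration:

from sklearn.metrics import accuracy_score, log_loss

# Hypothetical ground-truth labels, predicted labels, and predicted class probabilities
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 1, 1]
y_prob = [[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.4, 0.6]]

print("Accuracy:", accuracy_score(y_true, y_pred))   # fraction of correct predictions
print("Cross-entropy:", log_loss(y_true, y_prob))    # average negative log-likelihood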

(B) Informal Language:

Informal language is less structured and more conversational. It's often used in everyday
communication and is suitable for explaining problem-solving techniques to a non-
technical audience or for simplifying complex concepts. Here's how problem-solving
techniques can be expressed informally:

(a) Problem Statement (Informal Language):

When explaining the problem in a non-technical context, plain language is used to
make it accessible to a broader audience. It may involve using everyday examples
or analogies.

For example:

(i) "Imagine you have a list of numbers, and you want to find
two numbers in the list that add up to a specific value. How
would you go about doing that?"

(ii) "We want the computer to recognize what's in pictures. We


have a bunch of pictures, and we want the computer to tell
us if it's a cat, a dog, or something else."

(b) Solution Strategy:

Informal language offers a more casual and approachable way of presenting a
high-level strategy or approach for solving the problem without getting into
detailed steps.

For example:

i. "To solve this problem, we can start by looking at each number in


the list one by one and checking if there's another number in the
list that, when added to the current number, gives us the target
sum."

ii. "To teach the computer, we'll show it lots of pictures of cats and
dogs and tell it what's in each picture. It will start to notice
patterns, like the shape of cat ears or the fur of a dog. Then, when
we give it a new picture, it will guess what's in it based on what it
learned."

(c) Evaluation (Informal Language):

In non-technical terms, the evaluation of AI models can be described simply by
using real-life scenarios or anecdotes to illustrate problem-solving concepts.

For example:

(i) "We tested the computer on some new pictures it had never
seen before. It got most of them right, like 95 out of 100.
That means it's doing a good job!"

Simple Problem Suitable for AI Solution

A smart spam email detection system.

In AI, this process involves a combination of data analysis, pattern recognition,
and decision-making.

Problem Statement:

Given a collection of emails, the task is to automatically distinguish between
legitimate emails (ham) and unsolicited or irrelevant emails (spam).

AI Solution:

AI, particularly machine learning, can be employed to create a spam filter that can
accurately categorize incoming emails as either spam or not spam. Here's how an
AI solution might work:
(i) Data Collection: Gather a dataset of labeled emails, with some marked
as spam and others as ham. These emails should cover a range of
email types and content.

(ii) Feature Extraction: Extract relevant features from the email content
and metadata. These features might include keywords, sender
information, message structure, and more.

(iii) Training a Machine Learning Model: Use a machine learning algorithm (e.g.,
Naive Bayes, Support Vector Machine, or deep learning techniques like neural
networks) to train a model on the labeled dataset. The model learns to identify
patterns that distinguish spam from legitimate emails.

(iv) Model Evaluation: Assess the model's performance using metrics like
accuracy, precision, recall, and F1-score on a separate test dataset.
Fine-tune the model to improve its performance.

(v) Deployment: Integrate the trained model into an email client or server.
When new emails arrive, the model automatically classifies them as
spam or ham based on the learned patterns.

(vi) Continuous Learning: The AI system can continue to improve by collecting
feedback from users. Misclassified emails can be used to retrain the model and
enhance its accuracy over time.

Example:

A simple Python code example for implementing a spam email detection system using
a machine learning approach:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, classification_report

# Sample dataset with labeled emails (0 for ham, 1 for spam)
emails = [
    ("Hello, this is a legitimate email.", 0),
    ("Get rich quick! Win a million dollars!", 1),
    ("Meeting agenda for tomorrow's conference.", 0),
    ("Claim your prize now!", 1),
    # Add more labeled emails here
]

# Separate the emails and labels
X, y = zip(*emails)

# Convert text data into numerical features using CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(X)

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train a Naive Bayes classifier
classifier = MultinomialNB()
classifier.fit(X_train, y_train)

# Make predictions on the test set
y_pred = classifier.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")

# Generate a classification report
report = classification_report(y_test, y_pred)
print("Classification Report:\n", report)


Summary

(i) We create a dataset of labeled emails, with 0 representing legitimate emails (ham) and
1 representing spam emails.

(ii) We use the CountVectorizer to convert the email text data into numerical features.

(iii) The dataset is split into training and testing sets using train_test_split.

(iv) We create a Multinomial Naive Bayes classifier and train it on the training data.

(v) The classifier is used to make predictions on the test set, and we calculate the
accuracy of the model.

(vi) Finally, we generate a classification report that includes precision, recall, F1-score,
and support for both ham and spam classes.
