Ai 4

The document is a Python script for a simple chatbot that uses natural language processing techniques, including TF-IDF vectorization and cosine similarity, to respond to user queries. It includes functionalities for greeting users and processing their input to generate appropriate responses. The chatbot continues to interact until the user types 'bye'.
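The retrieval idea the script relies on can be shown in a minimal, self-contained sketch: vectorize a small corpus plus the user query with TF-IDF, then return the corpus sentence whose vector is most similar (by cosine) to the query. The corpus and query here are made-up illustrations, not taken from the script's chatbot.txt:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus (assumed data for illustration only)
corpus = [
    "a chatbot is a software application",
    "cosine similarity measures vector angle",
    "python is a programming language",
]
query = "what is a chatbot"

vec = TfidfVectorizer()
tfidf = vec.fit_transform(corpus + [query])        # last row is the query
scores = cosine_similarity(tfidf[-1], tfidf[:-1])  # query vs. each corpus sentence
best = scores.argmax()                             # index of the closest sentence
print(corpus[best])
```

The full script below applies the same pattern, except that it appends the query to the corpus and takes the second-highest similarity score, since the highest is always the query matched against itself.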


import random
import string
import warnings

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

warnings.filterwarnings('ignore')

# Install NLTK first if it is not already available: pip install nltk

import nltk
from nltk.stem import WordNetLemmatizer
nltk.download('popular',quiet=True)

# Read the corpus the chatbot will answer from
with open('chatbot.txt', 'r', errors='ignore') as f:
    raw = f.read()
raw = raw.lower()

sent_tokens = nltk.sent_tokenize(raw)  # corpus split into sentences
word_tokens = nltk.word_tokenize(raw)  # corpus split into words

lemmer = nltk.stem.WordNetLemmatizer()

def LemTokens(tokens):
    return [lemmer.lemmatize(token) for token in tokens]

remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)

def LemNormalize(text):
    # Lowercase, strip punctuation, tokenize, then lemmatize
    return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))

GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up", "hey")
GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello",
                      "I am glad! You are talking to me"]

def greeting(sentence):
    # If any word in the sentence is a known greeting, reply with a random one
    for word in sentence.split():
        if word.lower() in GREETING_INPUTS:
            return random.choice(GREETING_RESPONSES)

def response(user_response):
    robo_response = ''
    sent_tokens.append(user_response)
    TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english')
    tfidf = TfidfVec.fit_transform(sent_tokens)
    vals = cosine_similarity(tfidf[-1], tfidf)  # query (last row) vs. every sentence
    idx = vals.argsort()[0][-2]                 # most similar sentence (the highest is the query itself)
    flat = vals.flatten()
    flat.sort()
    req_tfidf = flat[-2]
    if req_tfidf == 0:
        robo_response = robo_response + "I am sorry! I don't understand you"
    else:
        robo_response = robo_response + sent_tokens[idx]
    return robo_response

flag = True
print("ROBO: My name is Robo. I will answer your queries about Chatbots. If you want to exit, type Bye!")
while flag:
    user_response = input()
    user_response = user_response.lower()
    if user_response != 'bye':
        if user_response == 'thanks' or user_response == 'thank you':
            flag = False
            print("ROBO: You are welcome..")
        else:
            if greeting(user_response) is not None:
                print("ROBO: " + greeting(user_response))
            else:
                print("ROBO: ", end="")
                print(response(user_response))
                sent_tokens.remove(user_response)
    else:
        flag = False
        print("ROBO: Bye! take care..")
