Python | Word Similarity using spaCy
Last Updated: 19 Jul, 2019
Word similarity is a number between 0 and 1 that tells us how semantically close two words are. It is computed by comparing word vectors in a vector space, typically via cosine similarity. spaCy, one of the fastest NLP libraries in wide use today, provides a simple method for this task.
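Under the hood, the score is the cosine of the angle between the two word vectors. A minimal NumPy sketch of that computation, using made-up 3-dimensional vectors in place of real 300-dimensional spaCy vectors:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine similarity: dot product divided by the product of the norms.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy vectors standing in for real word vectors.
cat = np.array([1.0, 2.0, 3.0])
dog = np.array([2.0, 3.0, 4.0])

print(cosine_similarity(cat, dog))  # close to 1.0: the vectors point in similar directions
```

Vectors pointing in nearly the same direction score close to 1, orthogonal vectors score 0.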
spaCy's Model -
spaCy supports two methods to find word similarity: using context-sensitive tensors, and using word vectors. Below are the commands to download these models (the example that follows uses the medium model, which includes word vectors).
# Downloading the small model containing tensors.
python -m spacy download en_core_web_sm
# Downloading the medium model containing word vectors (used below).
python -m spacy download en_core_web_md
# Downloading the large model with many more word vectors.
python -m spacy download en_core_web_lg
Below is the code to find word similarity, which can be extended to sentences and documents.
Python
import spacy

nlp = spacy.load('en_core_web_md')

print("Enter two space-separated words")
words = input()
tokens = nlp(words)

for token in tokens:
    # Printing the following attributes of each token.
    # text: the word string, has_vector: whether it has
    # a vector representation in the model,
    # vector_norm: the L2 norm of the vector,
    # is_oov: whether the word is out of vocabulary.
    print(token.text, token.has_vector, token.vector_norm, token.is_oov)

token1, token2 = tokens[0], tokens[1]
print("Similarity:", token1.similarity(token2))
Output:
cat True 6.6808186 False
dog True 7.0336733 False
Similarity: 0.80168545
The 'en_core_web_md' model yields 300-dimensional vectors for 'dog' and 'cat'. One may also use the larger 'en_vectors_web_lg' model, which provides vectors for a much larger vocabulary.
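As noted above, the same approach extends to sentences and documents: spaCy's doc.similarity compares document vectors, which for these models are the average of the token vectors. A minimal NumPy sketch of that averaging step, using toy 2-dimensional vectors rather than real spaCy output:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy per-token vectors for two short "sentences".
sent1 = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
sent2 = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]

# A document vector is the average of its token vectors.
doc1 = np.mean(sent1, axis=0)  # [0.5, 0.5]
doc2 = np.mean(sent2, axis=0)  # [1.0, 1.0]

print(cosine(doc1, doc2))  # 1.0: the averaged vectors point in the same direction
```

Because averaging discards word order, two sentences with the same words in different orders get identical document vectors under this scheme.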
Using Custom Language Models -
By simply switching the language model, we can find the similarity between Latin, French or German documents. spaCy supports a total of 49 languages at present. spaCy also allows one to set custom word vectors as per user need. Below is an example.
Python
import spacy
import numpy as np

nlp = spacy.load('en_core_web_md')
vocab = nlp.vocab

new_word = 'bucrest'
print('Before custom setting')
print(vocab.get_vector(new_word))

# Assigning a random 300-dimensional vector to the new word.
custom_vector = np.random.uniform(-1, 1, (300,))
vocab.set_vector(new_word, custom_vector)

print('After custom setting')
print(vocab.get_vector(new_word))
Output:
Before custom setting
array([0., 0., 0., 0., 0., 0., 0., 0., --- ])
After custom setting
array([ 0.68106073, 0.6037007, 0.9526876, -0.25600302, -0.24049562, --- ])