# Downloading the small model containing tensors.
python -m spacy download en_core_web_sm
# Downloading the medium model with 300-dimensional word vectors (used in the examples below).
python -m spacy download en_core_web_md
# Downloading over 1 million word vectors.
python -m spacy download en_core_web_lg
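To confirm that a downloaded package actually ships with word vectors, one can inspect the vocabulary's vector table. Below is a minimal sketch, assuming ‘en_core_web_md’ is installed.

import spacy

nlp = spacy.load('en_core_web_md')

# Shape of the vector table: (number of rows, vector dimension).
print(nlp.vocab.vectors.shape)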
Below is the code to find the similarity between two words; the same approach extends to sentences and whole documents (a document-level sketch follows after the output).
import spacy

# Load the medium English model, which ships with 300-dimensional word vectors.
nlp = spacy.load('en_core_web_md')

print("Enter two space-separated words")
words = input()
tokens = nlp(words)

# For each token: its text, whether it has a vector, the vector's L2 norm,
# and whether it is out of vocabulary.
for token in tokens:
    print(token.text, token.has_vector, token.vector_norm, token.is_oov)

token1, token2 = tokens[0], tokens[1]
print("Similarity:", token1.similarity(token2))
Output:
cat True 6.6808186 False
dog True 7.0336733 False
Similarity: 0.80168545
The ‘en_core_web_md’ model yields 300-dimensional vectors for ‘dog’ and ‘cat’. One may also use the larger vector package, ‘en_vectors_web_lg’, which covers over one million words (its vectors are also 300-dimensional).
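The same similarity API also works on whole sentences and documents: a Doc averages the vectors of its tokens, so two processed texts can be compared directly with Doc.similarity. Below is a minimal sketch, assuming ‘en_core_web_md’ is installed; the two sentences are illustrative.

import spacy

nlp = spacy.load('en_core_web_md')

# Illustrative sentences; Doc.similarity compares the averaged word vectors.
doc1 = nlp("I like fast food.")
doc2 = nlp("I enjoy burgers and fries.")

print("Document similarity:", doc1.similarity(doc2))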
Using Custom Language Models –
By simply switching the language model, we can compute the similarity between Latin, French or German documents; spaCy supports a total of 49 languages at present (a sketch with a German model appears at the end of this section). spaCy also allows one to set custom word vectors for individual words as needed. Below is an example.
import numpy as np
import spacy

nlp = spacy.load('en_core_web_md')
vocab = nlp.vocab

# An out-of-vocabulary word: its vector is all zeros by default.
new_word = 'bucrest'
print('Before custom setting')
print(vocab.get_vector(new_word))

# Assign a random 300-dimensional vector to the word.
custom_vector = np.random.uniform(-1, 1, (300,))
vocab.set_vector(new_word, custom_vector)

print('After custom setting')
print(vocab.get_vector(new_word))
Output:
Before custom setting
array([0., 0., 0., 0., 0., 0., 0., 0., ...])
After custom setting
array([ 0.68106073, 0.6037007, 0.9526876, -0.25600302, -0.24049562, ...])
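As noted above, the same workflow carries over to other languages by loading a different model. Below is a minimal sketch, assuming the German model ‘de_core_news_md’ (which ships with word vectors) has been downloaded via python -m spacy download de_core_news_md; the German sentences are illustrative.

import spacy

# Assumes: python -m spacy download de_core_news_md
nlp_de = spacy.load('de_core_news_md')

# Illustrative German sentences ("I like dogs." / "I have a cat.").
doc1 = nlp_de("Ich mag Hunde.")
doc2 = nlp_de("Ich habe eine Katze.")

print("Similarity:", doc1.similarity(doc2))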