Before we get into tokenization, let's first take a look at what spaCy is. spaCy is a popular library used in Natural Language Processing (NLP). It's an object-oriented library that helps with processing and analyzing text. We can use spaCy to clean and prepare text, break it into sentences and words, and extract useful information from the text using its various tools and functions. This makes spaCy a great tool for tasks like tokenization, part-of-speech tagging and named entity recognition.
What is Tokenization?
Tokenization is the process of splitting a text or a sentence into segments, which are called tokens. These tokens can be individual words, phrases, or characters depending on the tokenization method used. It is the first step of text preprocessing and is used as input for subsequent processes like text classification, lemmatization and part-of-speech tagging. This step is essential for converting unstructured text into a structured format that can be processed further for tasks such as sentiment analysis, named entity recognition and translation.
Example of Tokenization
Consider the sentence: "I love natural language processing!"
After tokenization: ["I", "love", "natural", "language", "processing", "!"]
Each token here represents a word or punctuation mark, making it easier for algorithms to process and analyze the text.
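As a quick sketch, this tokenization can be reproduced with spaCy's blank English pipeline (assuming spaCy is installed):

```python
import spacy

# A blank pipeline contains only the rule-based tokenizer
nlp = spacy.blank("en")

doc = nlp("I love natural language processing!")
tokens = [token.text for token in doc]
print(tokens)
# ['I', 'love', 'natural', 'language', 'processing', '!']
```

Note how the trailing "!" is split off as its own token: the tokenizer separates punctuation suffixes from the words they are attached to.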
Implementation of Tokenization using the spaCy Library
Python
import spacy

# Creating a blank English pipeline (tokenizer only),
# then tokenizing the words of the sentence
nlp = spacy.blank("en")
doc = nlp("GeeksforGeeks is a one stop "
          "learning destination for geeks.")

for token in doc:
    print(token)
Output:
GeeksforGeeks
is
a
one
stop
learning
destination
for
geeks
.
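A blank pipeline on its own only tokenizes; it does not segment sentences. As a minimal sketch, spaCy's rule-based sentencizer component can be added to the blank pipeline to enable sentence segmentation via doc.sents:

```python
import spacy

nlp = spacy.blank("en")
# Add a rule-based sentence segmenter; without it,
# iterating over doc.sents would raise an error
nlp.add_pipe("sentencizer")

doc = nlp("GeeksforGeeks is a one stop learning destination. "
          "It is made for geeks.")
for sent in doc.sents:
    print(sent.text)
```

The sentencizer splits on sentence-final punctuation, which is often enough for simple text; the statistical parser in a pretrained pipeline handles harder cases.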
We can also enrich tokens with additional attributes by loading a pretrained pipeline with spacy.load(), which adds components such as a tagger, parser and lemmatizer to the pipeline.
Python
nlp = spacy.load("en_core_web_sm")
print(nlp.pipe_names)
Output:
['tok2vec', 'tagger', 'parser', 'attribute_ruler', 'lemmatizer', 'ner']
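The components listed by pipe_names come from the loaded model. For contrast, here is a sketch showing that a blank pipeline starts with no components and that components can be added individually with add_pipe:

```python
import spacy

# A blank pipeline starts with no components at all
nlp = spacy.blank("en")
print(nlp.pipe_names)
# []

# Components can be added one at a time by name
nlp.add_pipe("sentencizer")
print(nlp.pipe_names)
# ['sentencizer']
```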
Here is an example of the additional functionality these pipeline components enable.
Python
import spacy

# Loading a pretrained pipeline with tagger,
# lemmatizer and other components
nlp = spacy.load("en_core_web_sm")

# Initialising a Doc with a sentence
doc = nlp("If you want to be an excellent programmer, "
          "be consistent to practice daily on GFG.")

# Using token attributes: part of speech and lemma
for token in doc:
    print(token, " | ",
          spacy.explain(token.pos_),
          " | ", token.lemma_)
Output:
If | subordinating conjunction | if
you | pronoun | you
want | verb | want
to | particle | to
be | auxiliary | be
an | determiner | an
excellent | adjective | excellent
programmer | noun | programmer
, | punctuation | ,
be | auxiliary | be
consistent | adjective | consistent
to | particle | to
practice | verb | practice
daily | adverb | daily
on | adposition | on
GFG | proper noun | GFG
. | punctuation | .
In the example above, we utilized part-of-speech (POS) tagging and lemmatization through the spaCy pipeline components. This allowed us to obtain the POS for each word and convert each token to its base form through lemmatization. Without loading the en_core_web_sm model, we would not have had access to this functionality: the pretrained model provides the necessary linguistic components, such as the tagger and lemmatizer, that enable these advanced NLP capabilities on top of plain tokenization.