PDL Lab 4
A PROJECT REPORT
Submitted by
BACHELOR OF TECHNOLOGY
IN
ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
JANUARY-MAY 2023-2024
BONAFIDE CERTIFICATE
SIGNATURE                                      SIGNATURE
Dr. B PRATHUSHA LAKSHMI                        MS. AKILA A
PROFESSOR AND HEAD                             ASST. PROF & SUPERVISOR
Department of AI-DS                            Department of AI-DS
R.M.K College of Engg. and Tech,               R.M.K College of Engg. and Tech,
Puduvoyal – 601 026.                           Puduvoyal – 601 026.
Submitted for the product development lab held on 16.04.2024 at R.M.K College
of Engineering and Technology, Puduvoyal, Tiruvallur District - 601206
CERTIFICATE OF EVALUATION
Semester 04
INTERNAL EXAMINER
ACKNOWLEDGEMENT
We earnestly express our sincere gratitude and regards to our beloved Founder
Chairman Thiru. R. S. MUNIRATHINAM for giving us the infrastructure
for conducting the project work and our Chairperson Tmt. MANJULA
MUNIRATHINAM for her blessings. We also thank our
Vice Chairman, Thiru. R. M. KISHORE and our Director, Thiru. R.
JOTHI NAIDU, for their constant support and affection shown towards us
throughout the course.
We are extremely thankful to our Principal, Dr. K.RAMAR, for being the
source of inspiration in this college.
We reveal our sincere thanks to our Professor and Head of the Department,
Artificial Intelligence and Data Science, Dr. B. PRATHUSHA LAXMI, for
her commendable support and encouragement for the completion of our
project.
We wish to record our thanks to our project supervisor Ms. AKILA A for her
valuable guidance and support during each stage of our project.
We take this opportunity to extend our thanks to all the faculty members of
Department of Artificial Intelligence and Data Science, parents and friends for
their care and support during the crucial times of the completion of our project.
ABSTRACT
The extensive spread of fake news has the potential for extremely negative impacts on individuals
and society. Therefore, fake news detection on social media has recently become an emerging
research topic that is attracting tremendous attention. Fake news detection on social media presents unique
characteristics and challenges that make existing detection algorithms from traditional news media
ineffective or not applicable. First, fake news is intentionally written to mislead readers to believe
false information, which makes it difficult and nontrivial to detect based on news content; therefore,
we need to include auxiliary information, such as user social engagements on social media, to help
make a determination. Second, exploiting this auxiliary information is challenging in and of itself as
users’ social engagements with fake news produce data that is big, incomplete, unstructured, and
noisy. Because the issue of fake news detection on social media is both challenging and relevant, we
conducted this survey to further facilitate research on the problem.
List of Figures
Figure 1: Existing System
Figure 6: Input
Figure 7: Output
List Of Abbreviations:
AI - Artificial Intelligence
ML - Machine Learning
NLP - Natural Language Processing
SVM - Support Vector Machine
RF - Random Forest
DT - Decision Tree
CNN - Convolutional Neural Network
LSTM - Long Short-Term Memory
BERT - Bidirectional Encoder Representations from Transformers
RNN - Recurrent Neural Network
TF-IDF - Term Frequency-Inverse Document Frequency
API - Application Programming Interface
URL - Uniform Resource Locator
HTML - Hypertext Markup Language
CSS - Cascading Style Sheets
TABLE OF CONTENTS
1. INTRODUCTION
2. LITERATURE SURVEY
3. PROJECT DESCRIPTION
3.3.1 ECONOMIC
3.3.2 TECHNICAL
3.3.3 SOCIAL
4. MODEL DESCRIPTION
4.4 ADVANTAGES OF PROPOSED SYSTEM
5.2 TESTING
7.1 CONCLUSION
REFERENCES
1. INTRODUCTION
As an increasing amount of our lives is spent interacting online through social media
platforms, more and more people tend to seek out and consume news from social media rather
than traditional news organizations. The reasons for this change in consumption behavior are
inherent in the nature of these social media platforms. Fake news is usually manipulated by
propagandists to convey political messages or influence. Detecting fake news on social media
poses several new and challenging research problems.
Though fake news itself is not a new problem–nations or groups have been using the news
media to execute propaganda or influence operations for centuries–the rise of web-generated
news on social media makes fake news a more powerful force that challenges traditional
journalistic norms. There are several characteristics of this problem that make it uniquely
challenging for automated detection. Fake news is usually related to newly emerging, time-
critical events, which may not have been properly verified by existing knowledge bases due to
the lack of corroborating evidence or claims. In addition, users’ social engagements with fake
news produce data that is big, incomplete, unstructured, and noisy.
Increased scholarly focus has been directed to fake news detection given its widespread
impact, including supply chain disruptions, as was the case with the COVID-19 vaccine. Fake
news and misinformation are highly disruptive, creating uncertainty and disruption not only in
society but also in business operations. Problems related to fake news and disinformation are
exacerbated by the rise of social media sites. In this regard, using artificial intelligence (AI) to
counteract the spread of false information is vital for acting against these disruptive effects. It
has been observed that fake news and disinformation (FNaD) harm supply chains and make
their operation unsustainable (Churchill, 2018).
2. LITERATURE SURVEY
Fake news detection aims to stop the rumors that are spread through various platforms,
whether social media or messaging platforms. Such fake news can lead to activities like mob
lynching, which has been a strong motivation for us to work on this project. We have repeatedly
seen reports of mob lynching that lead to the murder of individuals; fake news detection works
with the objective of detecting this fake news and stopping such activities, thereby protecting
society from these unwanted acts of violence. The news authenticator follows a series of steps to
check whether a given news item is true or false. It compares the news supplied by the user against
different websites and various news sources; if the news is found on any news website, it is
reported as true, otherwise the system reports that there has been no such news in the last few
days. This can help protect us from fake news. These days fake news spreads very fast because
of social media and the internet, so the news authenticator helps us detect whether the given
news is fake or real.
The main objective is to detect fake news, which is a classic text classification
problem with a straightforward proposition: we need to build a model that can differentiate
between “Real” news and “Fake” news. Fake news has serious consequences on social networking
sites like Facebook and Instagram, microblogging sites like Twitter, and instant messaging
applications like WhatsApp and Hike, where it gets a major boost and goes viral among people
across the country and the globe. The proposed system helps to establish the authenticity of the
news. If the news is not real, the user is presented with relevant genuine news articles instead.
The news suggestion module recommends recent articles related to the news the user has
submitted for authentication, based on the keywords it contains; if the submitted news is fake,
related articles on the same topic are suggested.
3. PROJECT DESCRIPTION
3.1 Existing System
Figure 1: Existing System
3.3 Feasibility Study
The development of a deep learning-based fake news detection system presents a promising
opportunity, and its feasibility is assessed here across the economic, technical, and social
domains.
3.3.1 Economic
Facebook has stated that it is working to fight the spread of false news in two key areas.
The first is disrupting economic incentives, because most false news is financially motivated.
The second is building new products to curb the spread of false news. Some of the preventive
measures taken by Facebook include ranking improvements, where News Feed ranking reduces
the prevalence of false news content, and easier reporting, which helps determine what is
valuable and what is not: stories that are flagged as false by the community may show up lower
in users' feeds.
3.3.2 Technical
Once we have our dataset, we preprocess the text data. This involves tasks such as tokenization,
removing stop words, and converting the text into a numerical representation that machine
learning algorithms can understand. For this purpose, we use TF-IDF (Term Frequency-Inverse
Document Frequency) vectorization, which converts text documents into numerical vectors
based on the importance of each word in the document relative to the entire corpus.
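To make this concrete, the short sketch below (an illustration only, using two invented example sentences) shows how scikit-learn's TfidfVectorizer turns a small corpus into TF-IDF weighted vectors.

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Two invented example documents standing in for news texts
    docs = ["scientists confirm the new vaccine is safe",
            "shocking secret the government hides about vaccines"]

    vectorizer = TfidfVectorizer(stop_words="english")   # also drops common stop words
    tfidf = vectorizer.fit_transform(docs)               # sparse document-term matrix

    print(vectorizer.get_feature_names_out())            # vocabulary learned from the corpus
    print(tfidf.toarray())                               # TF-IDF weight of each term per document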
For fake news detection, most previously mentioned approaches focus on extracting various
features, incorporating these features into supervised classification models, such as naive
Bayes, decision trees, logistic regression, k-nearest neighbors (KNN), and support vector
machines (SVM), and then selecting the classifier that performs best.
More research can be done to build more complex and effective models and to better utilize
extracted features, for example through aggregation methods, probabilistic methods, ensemble
methods, or projection methods. In particular, ensemble-style aggregation of several classifiers,
sketched below, appears promising.
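As an illustration of such an ensemble approach (a minimal sketch with invented toy headlines, not the project's final pipeline), the snippet below combines logistic regression, naive Bayes, and a random forest over TF-IDF features through soft voting.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB

    # Invented toy corpus: 0 = fake, 1 = real
    texts = ["aliens endorse candidate in secret meeting",
             "parliament passes the annual budget bill",
             "miracle pill cures every disease overnight",
             "central bank leaves interest rates unchanged"]
    labels = [0, 1, 0, 1]

    # TF-IDF features feeding a soft-voting ensemble of three base classifiers
    model = make_pipeline(
        TfidfVectorizer(),
        VotingClassifier(
            estimators=[("lr", LogisticRegression(max_iter=1000)),
                        ("nb", MultinomialNB()),
                        ("rf", RandomForestClassifier(random_state=0))],
            voting="soft"))          # average the predicted class probabilities

    model.fit(texts, labels)
    print(model.predict(["secret miracle cure hidden by doctors"]))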
3.3.3 Social
In addition to features related directly to the content of the news articles, additional social
context features can also be derived from the user-driven social engagements of news consumption
on social media platforms. Social engagements represent the news proliferation process over time,
which provides useful auxiliary information to infer the veracity of news articles. Note that few
papers exist in the literature that detect fake news using social context features. However, because
we believe this is a critical aspect of successful fake news detection, we introduce a set of common
features utilized in similar research areas, such as rumor veracity classification on social media.
Generally, there are three major aspects of the social media context that we want to represent:
users, generated posts, and networks. Below, we investigate how we can extract and represent
social context features from these three aspects to support fake news detection.
3.4 System Specifications
Software Requirements:
1. Programming Language - Python
2. Integrated Development Environment - Jupyter Notebook, Google Colab
3. Data Preprocessing and Analysis Tools - Pandas, NumPy, Matplotlib
Hardware Requirements:
1. RAM - 16 GB
2. Processor - Ryzen 7
3. Speed - 2.6 GHz
4. Hard Disk - 100 GB
4. MODEL DESCRIPTION
4.1 General Architecture
In CNN hyperparameter optimization, a meta-heuristic algorithm iteratively refines the
hyperparameters based on a fitness function: normalized images are fed into the encapsulated
CNN architecture, its performance is evaluated, and the search continues until the stop criterion
is met.
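The sketch below only illustrates the shape of such a search loop; it uses plain random search in place of a specific meta-heuristic, and evaluate_cnn is a hypothetical fitness function standing in for training the CNN on the normalized images and returning its validation accuracy.

    import random

    # Hypothetical search space for the CNN hyperparameters
    search_space = {"learning_rate": [1e-4, 1e-3, 1e-2],
                    "filters": [16, 32, 64],
                    "kernel_size": [3, 5]}

    def evaluate_cnn(params):
        # Placeholder fitness function: in a real pipeline this would build the CNN
        # with `params`, train it on the normalized images, and return validation accuracy.
        return random.random()

    best_params, best_fitness = None, -1.0
    for _ in range(20):                                   # stop criterion: fixed budget of 20 trials
        candidate = {k: random.choice(v) for k, v in search_space.items()}
        fitness = evaluate_cnn(candidate)
        if fitness > best_fitness:                        # keep the best configuration found so far
            best_params, best_fitness = candidate, fitness

    print(best_params, best_fitness)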
Figure 4: Data Flow
4.2.2 Block Diagram
This section compares Blockchain technology with traditional databases. Blockchain can be
implemented as an alternative to a secure database, and it has been used to save all feedback and source evaluations.
In recent years, Blockchain has gained massive popularity in many research areas, with several approaches
proposed for using it to combat fake news and provide a transparent and secure environment.
4.3 Model Description:
Detecting fake news is a layered process that involves analysis of the news contents to
determine the truthfulness of the news. The news could contain information in various formats
such as text, video, image, etc. Combinations of different types of data make the detection
process difficult.
In addition, raw data collected is always expected to be unstructured and contain missing
values in the data. As fake news produces big, incomplete, unstructured, and noisy data, raw data
pre-processing is extremely important to clean and structure the data before feeding it into
detection models. Moreover, fake news creators use many new ideas to make their false creations
successful, one of which is to stimulate the emotions of their readers.
4.4 Advantages of Proposed System
Global Coverage: AI and ML-based fake news detection systems can operate across different
languages and regions, providing a global perspective on the spread of misinformation and
enabling interventions on a large scale.
High Accuracy: With proper training and optimization, AI and ML models can achieve high
levels of accuracy in identifying fake news, outperforming traditional rule-based or manual
approaches.
Multimodal Analysis: AI and ML techniques can analyze various types of data, including text,
images, and videos, enabling more comprehensive detection of fake news that may utilize
different media formats to deceive audiences.
Adaptability: ML models can continuously learn and adapt to new tactics employed by
purveyors of fake news, ensuring that detection mechanisms remain effective even as
misinformation evolves.
Scalability: AI and ML algorithms can handle vast amounts of data, making them suitable for
analyzing the sheer volume of information available online, which is crucial for identifying
patterns indicative of fake news.
5. IMPLEMENTATION AND TESTING
5.1 Input and Output
Figure 6: Input
Input Description: Fake news detection utilizing AI and ML involves a systematic process of
gathering textual data from diverse sources such as news websites and social media platforms. This
data is then subjected to thorough preprocessing, including tokenization, normalization, and feature
extraction to transform the text into a format suitable for analysis. Machine learning models,
ranging from traditional algorithms to deep learning architectures, are trained on labelled datasets
to discern patterns distinguishing fake news from genuine ones. Throughout model development,
evaluation metrics like accuracy and F1-score are used to assess performance and fine-tune the
model for optimal results.
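As a small illustration of that evaluation step (the label arrays below are invented placeholders, not project results), accuracy and F1-score can be computed from predictions as follows.

    from sklearn.metrics import accuracy_score, f1_score

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # toy ground truth: 1 = genuine, 0 = fake
    y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # toy model predictions

    print("Accuracy:", accuracy_score(y_true, y_pred))   # fraction of correct predictions
    print("F1-score:", f1_score(y_true, y_pred))         # harmonic mean of precision and recall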
Once trained, the model is deployed into production environments, where it
continuously evaluates incoming news articles, providing users with probabilistic assessments of
their credibility. This iterative process, encompassing data collection, preprocessing, model
training, evaluation, and deployment, enables AI and ML to play a vital role in combating the
proliferation of misinformation and promoting informed decision-making in the digital age.
Figure 7: Output
Output Description: Fake news detection employing AI and ML entails a sophisticated approach
to analyzing textual content sourced from various platforms like news websites and social media.
Through extensive preprocessing, which includes tasks like tokenization and normalization, the
text is transformed into a structured format amenable to computational analysis. Leveraging
machine learning algorithms, ranging from classical models to advanced deep learning
architectures, the system learns intricate patterns that distinguish between authentic and misleading
information. Rigorous evaluation using metrics like accuracy and precision ensures the model's
efficacy before deployment. Once operational, the system continuously scrutinizes incoming news
articles, assigning credibility scores that aid users in discerning trustworthy sources from deceptive
ones.
5.2 Testing
Dataset Splitting: The labeled dataset used for training is typically divided into three subsets:
training data (used for model training), validation data (used for hyperparameter tuning), and
test data (used for evaluating the final model). The test data, which comprises unseen examples,
is crucial for assessing the model's generalization capabilities.
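A minimal sketch of such a three-way split, assuming 70/15/15 proportions and placeholder data, is shown below using two successive calls to train_test_split.

    from sklearn.model_selection import train_test_split

    x = ["article %d text" % i for i in range(100)]   # placeholder article texts
    y = [i % 2 for i in range(100)]                   # placeholder labels: 0 = fake, 1 = genuine

    # First split off 30%, then halve it into validation and test sets
    x_train, x_temp, y_train, y_temp = train_test_split(x, y, test_size=0.30, random_state=0)
    x_val, x_test, y_val, y_test = train_test_split(x_temp, y_temp, test_size=0.50, random_state=0)

    print(len(x_train), len(x_val), len(x_test))      # 70 15 15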
Model Evaluation Metrics: Various evaluation metrics are employed to quantify the
performance of the model. Common metrics for binary classification tasks like fake news
detection include accuracy, precision, recall, F1-score, and area under the ROC curve (AUC-
ROC). These metrics provide insights into different aspects of the model's performance, such
as its ability to correctly classify fake and genuine news and its capability to minimize false
positives and false negatives.
Cross-Validation: In addition to traditional train-test splits, techniques like k-fold cross-
validation may be employed to validate the model's performance across multiple folds of the
dataset. This approach helps mitigate issues related to data variability and ensures that the
model's performance estimates are more reliable.
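The brief sketch below shows 5-fold cross-validation of a TF-IDF plus logistic regression pipeline with scikit-learn's cross_val_score; the tiny repeated corpus is an invented stand-in for the real dataset.

    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["fake miracle cure found", "official report released today"] * 10   # toy corpus
    labels = [0, 1] * 10                                                          # toy labels

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(model, texts, labels, cv=5, scoring="accuracy")
    print(scores.mean(), scores.std())   # average accuracy and its variability across folds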
Iterative Refinement: Based on the results of the testing phase, the model may undergo
iterative refinement, which involves adjusting hyperparameters, feature engineering strategies,
or even selecting different algorithms to enhance performance. This iterative process continues
until satisfactory performance is achieved on the test data.
Final Model Selection: Once the testing phase is complete, and the model's performance meets
the desired criteria, the final trained model is selected for deployment in production
environments, where it can be used to detect fake news in real-time.
5.3 Types of Testing
Unit Testing: Unit testing involves testing individual components or modules of the fake news
detection system in isolation. This ensures that each part of the system performs as expected
independently. In the context of AI and ML, unit testing may involve testing individual
functions responsible for data preprocessing, feature extraction, or model training.
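For example, a pytest-style unit test for a text-cleaning helper might look like the sketch below; the simplified word_drop defined here is an assumed stand-in for the preprocessing function used in the sample code.

    import re
    import string

    def word_drop(text):
        # Simplified cleaning helper: lower-case, strip URLs and punctuation
        text = text.lower()
        text = re.sub(r'https?://\S+|www\.\S+', '', text)
        text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
        return text.strip()

    def test_word_drop_removes_urls_and_punctuation():
        cleaned = word_drop("BREAKING!!! Read more at http://example.com")
        assert "http" not in cleaned          # URL stripped
        assert "!" not in cleaned             # punctuation stripped
        assert cleaned == cleaned.lower()     # text lower-cased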
Integration Testing: Integration testing focuses on verifying the interactions and
interoperability between different components of the system. This includes testing how well
data preprocessing modules integrate with feature extraction modules and how well the trained
model integrates with the overall system architecture.
Validation Testing: Validation testing aims to validate that the system meets the specified
requirements and objectives. In the context of fake news detection, validation testing ensures
that the system accurately distinguishes between fake and genuine news according to predefined
criteria.
Performance Testing: Performance testing assesses the system's ability to handle varying
workloads and perform tasks within acceptable time frames. In the case of fake news detection,
performance testing may involve evaluating the speed and efficiency of the system in
processing and analyzing large volumes of textual data.
Stress Testing: Stress testing evaluates the system's stability and robustness under extreme
conditions, such as high data volumes or concurrent user requests. This type of testing helps
identify potential bottlenecks or failure points in the system and ensures it can operate reliably
under challenging circumstances.
User Acceptance Testing (UAT): User acceptance testing involves testing the system with end-
users to ensure it meets their needs and expectations. In the context of fake news detection, UAT
may involve presenting users with fake and genuine news articles and gathering feedback on
the system's performance and usability.
5.4 Testing Strategy
Testing and strategy for fake news detection using AI and ML algorithms involves a
meticulous approach aimed at ensuring the reliability and effectiveness of the system. Initially,
diverse datasets comprising fake and genuine news articles are collected and preprocessed to
prepare them for analysis. During model training, hyperparameters are tuned, and various
evaluation metrics are employed to assess the model's performance.
Robustness and generalization tests are conducted to verify the model's ability to
handle variations in input data, resist adversarial attacks, and generalize well to unseen domains.
Additionally, performance and scalability testing ensure the system meets real-time processing
requirements and can handle increasing data volumes. User acceptance testing solicits feedback
from end-users to enhance the system's usability and effectiveness.
Compliance and ethical considerations are also addressed through fairness testing
and ethical impact assessments. By employing this comprehensive testing strategy, developers
can validate the efficacy and reliability of fake news detection systems, thereby contributing to
the fight against misinformation in the digital age.
6. RESULTS AND DISCUSSION
6.1 Efficiency of Proposed Model
The efficiency of a proposed model for fake news detection using AI and ML hinges
on several critical factors. Firstly, the quality and diversity of the data utilized for training are
paramount. Biased or insufficient datasets can severely limit the model's effectiveness. Equally
important is the selection of relevant features that accurately represent the characteristics of
fake news, encompassing linguistic patterns, source credibility, and social network analysis.
The choice of machine learning algorithms, such as Naive Bayes, Support Vector
Machines, or Neural Networks, significantly influences the model's performance. Additionally,
the training methodology, including data preprocessing, feature engineering, and model
validation, plays a crucial role. Real-time processing capability is essential, particularly for
applications in social media platforms where misinformation spreads rapidly.
6.2 Comparison of Existing and Proposed Model
When comparing existing models with a proposed one for fake news detection using
AI and ML, several key factors come into play. Existing models have likely established a
baseline in terms of accuracy, feature selection, algorithmic approach, and scalability. The
proposed model must be evaluated against these benchmarks to determine its potential
improvements. This involves assessing whether the proposed model incorporates novel features
or algorithms that enhance detection accuracy beyond what existing models achieve.
Moreover, considerations such as data utilization, scalability, real-time processing
capabilities, robustness against adversarial attacks, and ethical implications must be weighed
against those of existing models. Additionally, user interface design and interpretability of results
are crucial for user acceptance. Ultimately, the proposed model should demonstrate superior
performance in terms of accuracy, scalability, robustness, and ethical considerations to justify its
adoption over existing solutions for fake news detection using AI and ML.
6.4 Sample Code
# Fake news detection: data preparation, TF-IDF vectorization and model training
import re
import string

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

# Load the labelled datasets of fake and true news articles
df_fake = pd.read_csv(r"C:\Users\Admin\Downloads\Fake_news_detection\Fake.csv")
df_true = pd.read_csv(r"C:\Users\Admin\Downloads\Fake_news_detection\True.csv")
df_fake.head(10)
df_true.head(10)

# Label the classes: 0 = fake news, 1 = true news
df_fake["class"] = 0
df_true["class"] = 1
df_fake.shape, df_true.shape

# Set aside the last 10 rows of each dataset for manual testing
df_fake_manual_testing = df_fake.tail(10)
for i in range(23480, 23470, -1):
    df_fake.drop([i], axis=0, inplace=True)
df_true_manual_testing = df_true.tail(10)
for i in range(21416, 21406, -1):
    df_true.drop([i], axis=0, inplace=True)
df_manual_testing = pd.concat([df_fake_manual_testing, df_true_manual_testing], axis=0)
df_manual_testing.to_csv("manual_testing.csv")

# Merge the two datasets and shuffle the rows
df_merge = pd.concat([df_fake, df_true], axis=0)
df_merge.head(10)
df = df_merge.sample(frac=1)
df.head(10)
df.isnull().sum()

# Clean the text: lower-case, remove URLs, HTML tags, punctuation and digits
def word_drop(text):
    text = text.lower()
    text = re.sub(r'\[.*?\]', '', text)
    text = re.sub(r'\W', ' ', text)
    text = re.sub(r'https?://\S+|www\.\S+', '', text)
    text = re.sub(r'<.*?>+', '', text)
    text = re.sub('[%s]' % re.escape(string.punctuation), '', text)
    text = re.sub(r'\n', '', text)
    text = re.sub(r'\w*\d\w*', '', text)
    return text

df["text"] = df["text"].apply(word_drop)

# Split the data and convert the text to TF-IDF features
x = df["text"]
y = df["class"]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
vectorization = TfidfVectorizer()
xv_train = vectorization.fit_transform(x_train)
xv_test = vectorization.transform(x_test)

# Logistic Regression
LR = LogisticRegression()
LR.fit(xv_train, y_train)
LR.score(xv_test, y_test)
pred_LR = LR.predict(xv_test)
print(classification_report(y_test, pred_LR))

# Decision Tree
DT = DecisionTreeClassifier()
DT.fit(xv_train, y_train)
DT.score(xv_test, y_test)
pred_DT = DT.predict(xv_test)
print(classification_report(y_test, pred_DT))

# Gradient Boosting
GBC = GradientBoostingClassifier(random_state=0)
GBC.fit(xv_train, y_train)
GBC.score(xv_test, y_test)
pred_GBC = GBC.predict(xv_test)
print(classification_report(y_test, pred_GBC))

# Random Forest
RFC = RandomForestClassifier(random_state=0)
RFC.fit(xv_train, y_train)
RFC.score(xv_test, y_test)
pred_RFC = RFC.predict(xv_test)
print(classification_report(y_test, pred_RFC))

# Map a numeric prediction back to a readable label
def output_label(n):
    if n == 0:
        return "Fake News"
    elif n == 1:
        return "Not A Fake News"

# Run a single user-supplied article through all four trained models
def manual_testing(news):
    testing_news = {"text": [news]}
    new_def_test = pd.DataFrame(testing_news)
    new_def_test["text"] = new_def_test["text"].apply(word_drop)
    new_x_test = new_def_test["text"]
    new_xv_test = vectorization.transform(new_x_test)
    pred_LR = LR.predict(new_xv_test)
    pred_DT = DT.predict(new_xv_test)
    pred_GBC = GBC.predict(new_xv_test)
    pred_RFC = RFC.predict(new_xv_test)
    print("LR: {}\nDT: {}\nGBC: {}\nRFC: {}".format(
        output_label(pred_LR[0]), output_label(pred_DT[0]),
        output_label(pred_GBC[0]), output_label(pred_RFC[0])))

news = str(input())
manual_testing(news)
7.1 Conclusion
In conclusion, the integration of AI and ML in the detection of fake news represents a significant
advancement in combating the proliferation of misinformation. These technologies offer
promising results in analyzing vast amounts of data, identifying patterns, and assessing the
credibility of sources at a scale and speed beyond human capability. However, challenges
persist, including the need for continually evolving algorithms to keep pace with the dynamic
nature of fake news, the requirement for robust labeled datasets for training, and the imperative
to address potential biases inherent in algorithmic decision-making. Moreover, the ethical
implications of deploying AI in this context, such as concerns surrounding privacy, freedom of
speech, and algorithmic fairness, must be carefully considered and mitigated. Moving forward,
interdisciplinary collaboration between experts in computer science, journalism, psychology,
and ethics is essential to develop comprehensive solutions. Additionally, education and
awareness initiatives are crucial for empowering individuals to critically evaluate information
and recognize misinformation independently. Ultimately, while AI and ML hold significant
promise in the fight against fake news, they should be viewed as part of a broader strategy that
emphasizes continuous improvement, ethical considerations, and the promotion of media
literacy.
7.2 Future Enhancement
Future enhancements in fake news detection using AI and ML hold the promise of significantly
advancing our ability to combat the spread of misinformation. One avenue of improvement lies in the
development of more sophisticated deep learning architectures, such as convolutional neural networks
(CNNs) and recurrent neural networks (RNNs), capable of capturing intricate patterns across various
modalities, including text, images, and videos. Additionally, exploring techniques for multi-modal
analysis could enable detection systems to better discern fake news across different media types.
Enhancements in explainability and interpretability will be crucial for building trust in these AI models,
as users will need to understand how decisions are made. Adversarial robustness is another area ripe for
improvement, ensuring that detection systems remain effective in the face of evolving adversarial
attacks.
REFERENCES
1. Qureshi KA, Malick RAS, Sabih M, Cherifi H (2021) Complex network and source inspired Covid-19 fake news classification on Twitter. IEEE Access 9:139636–139656. https://doi.org/10.1109/ACCESS.2021.3119404
2. Gerken T (2022) Google to run ads educating users about fake news. BBC. https://www.bbc.com/news/technology-62644550
3. Tokyo Olympics (2021) Organisers say summer games cancellation report fake news - sports news, Firstpost. https://www.firstpost.com/sports/tokyo-olympics-2020-organisers-say-summer-games-cancellation-report-fake-news-9193521.html
4. Fake news is clouding the real stories around the Ukrainian crisis - here's how to spot it (2022). https://www.weforum.org/agenda/2022/03/fake-viral-footage-is-spreading-alongside-the-real-horror-in-ukraine-here-are-5-ways-to-spot-it/
5. Fake alert: will govt monitor your WhatsApp chats? Here's the truth (2022). https://economictimes.indiatimes.com/news/new-updates/fake-alert-will-govt-monitor-your-whatsapp-chats-heres-the-truth/articleshow/93712093.cms
6. Mahrishi M, Morwal S, Muzaffar AW, Bhatia S, Dadheech P, Rahmani MKI (2021) Video index point detection and extraction framework using custom YOLOv4 Darknet object detection model. IEEE Access 9:143378–143391. https://doi.org/10.1109/ACCESS.2021.3118048
7. Kumar Mahrishi M, Meena G (2022) A comprehensive review of recent automatic speech summarization and keyword identification techniques. Springer, Cham, pp 111–126
8. Dahouda MK, Joe I (2021) A deep-learned embedding technique for categorical features encoding. IEEE Access 9:114381–114391. https://doi.org/10.1109/ACCESS.2021.3104357
9. Alhawarat M, Aseeri AO (2020) A superior Arabic text categorization deep model (SATCDM). IEEE Access 8:24653–24661
10. Jiang T, Li JP, Haq AU, Saboor A, Ali A (2021) A novel stacking approach for accurate detection of fake news. IEEE Access 9:22626–22639
11. Almars AM, Almaliki M, Noor TH, Alwateer MM, Atlam E (2022) HANN: Hybrid attention neural network for detecting Covid-19 related rumors. IEEE Access 10:12334–12344. https://doi.org/10.1109/ACCESS.2022.3146712