
Industrial Training Report

On
Data Analytics with Python

Done at

Sofcon India Pvt. Ltd

(Sec 3, Noida)
Submitted in partial fulfillment of the requirements
for the award of the Diploma in Information Technology

DIPLOMA IN INFORMATION TECHNOLOGY


GOVT. POLYTECHNIC SRINAGAR GARHWAL
UTTARAKHAND – INDIA
Session 2024
(Summer)

SUBMITTED TO:
DEPARTMENT OF INFORMATION TECHNOLOGY

SUBMITTED BY:
NAME: Piyush Singh Rawat
BRANCH: I.T.
ROLL NO: 22014120013
YEAR / SEM: 3rd year / 5th sem
DECLARATION
I hereby declare that the industrial training report entitled "Data Analytics with
Python", done at Sofcon India Pvt. Ltd (Sec. 3, Noida) and submitted by me for the
award of the Diploma in Information Technology to Government Polytechnic
Srinagar (Garhwal), Uttarakhand, is a record of bona fide work carried out at Dehradun
between 13-07-2024 and 12-08-2024.

I confirm that this report has not been copied or reproduced from any other source,
and all references used have been appropriately cited wherever required. The
experiences and insights documented in this report reflect my personal learning
journey and understanding of data analytics with Python.

I take full responsibility for the authenticity and accuracy of the information provided
and assure that this report adheres to the guidelines and standards set forth by the
training institution.
CERTIFICATES
Industrial Training Letter cum Certificate:
ACKNOWLEDGEMENT

It is my proud privilege and duty to acknowledge the help and guidance received
from several people in the preparation of this report. It would not have been possible to
prepare this report in its present form without their valuable help, cooperation, and guidance.
First and foremost, I wish to record my sincere gratitude to Slog Solutions for their
constant support and encouragement in the preparation of this report and for making
available the videos and infrastructure facilities needed to prepare it. The seminar on
"Data Analytics with Python" was very helpful in giving me the necessary
background information and the inspiration for choosing this topic. Their
contributions and technical support in preparing this report are gratefully acknowledged.
Last but not least, I wish to thank my parents for financing my studies in this
college as well as for constantly encouraging me to learn engineering. Their personal
sacrifice in providing this opportunity is gratefully acknowledged.
ABSTRACT
This industrial training report highlights the application of data analytics using Python
to solve real-world business challenges. The training aimed to provide hands-on
experience in extracting, analyzing, and visualizing data to derive actionable insights.
Key aspects of the training included data preprocessing, exploratory data analysis
(EDA), and the implementation of statistical and machine learning models.

Throughout the training, various Python libraries such as Pandas, NumPy, Matplotlib
and Seaborn were extensively utilized to process and analyze datasets from diverse
domains, including finance, healthcare, and marketing. Specific techniques, such as data
cleaning, feature engineering, and predictive modeling, were employed to ensure the
accuracy and reliability of insights.

Additionally, the training emphasized the importance of data storytelling through
visualizations and dashboards, enabling effective communication of findings to
stakeholders. Projects undertaken during the training demonstrated proficiency in
tackling real-world data challenges, such as customer segmentation, trend forecasting,
and anomaly detection.

This report concludes by reflecting on the skills acquired and the practical applications
of Python in data analytics. The experience has not only strengthened technical
competencies but also enhanced problem-solving and critical-thinking abilities,
preparing the participant for a professional career in data analytics.
TABLE OF CONTENT

 INTRODUCTION TO PYTHON……………………………………………
 PYTHON LANGUAGE ESSENTIALS………………………………………
 INTRODUCTION TO DATA SCIENCE…………………………………..
 NUMPY………………………………………………………………………………
 DATA VISUALIZATION USING MATPLOTLIB AND PANDAS……
 BASICS OF TABLEAU………………………………………….....................
 WORKING WITH DATABASES (SQL)…………………………………….
 BASICS OF POWER BI ………………………………………………………
 CONCLUSION…………………………………………………………………...
CHAPTER 1
INTRODUCTION TO PYTHON

Python is a versatile, high-level, and interpreted programming language that has gained immense
popularity in the tech industry for its simplicity, readability, and a vast ecosystem of libraries.
Created by Guido van Rossum in the late 1980s and officially released in 1991, Python was designed
to emphasize code readability and reduce the cost of program maintenance. Its name was inspired by
Monty Python’s Flying Circus, reflecting its creator’s sense of humor.

Over the years, Python has become one of the most widely used programming languages globally,
with applications ranging from web development to artificial intelligence, making it a preferred
choice for developers, researchers, and data scientists. This introduction will explore the language's
features, history, applications, and benefits.

A Brief History of Python


Python’s journey began in December 1989 when Guido van Rossum started working on a new
language at Centrum Wiskunde & Informatica (CWI) in the Netherlands. Dissatisfied with the
limitations of the ABC programming language, which he had previously worked on, van Rossum
sought to develop a language that was simple yet powerful.

The first public release, Python 0.9.0, appeared in February 1991 and already included key
features like exception handling, functions, and modules; Python 1.0 followed in 1994. Over the
years, Python evolved significantly, with Python 2.0 introduced in 2000 and Python 3.0 in 2008.
Python 3 marked a significant shift with backward-incompatible changes but was designed to fix
fundamental flaws in earlier versions. Today, Python is maintained by the Python Software
Foundation (PSF) and enjoys a robust and active community.

Features of Python
Python’s design philosophy centers around code simplicity and readability. Below are some of its
distinguishing features:

1. Simple Syntax
Python’s syntax is clear and concise, making it an ideal language for beginners and
professionals alike. It uses indentation instead of braces or keywords to define code blocks,
improving code readability.
2. Interpreted Language
Python executes code line by line, enabling immediate feedback and debugging during the
development process.
3. Dynamic Typing
Unlike statically typed languages, Python does not require explicit type declarations for
variables, making it more flexible and developer-friendly.
4. Extensive Standard Library
Python’s standard library includes modules for tasks like file handling, database
management, and internet protocols, reducing the need for additional code.
5. Platform Independence
Python code can run on various operating systems, including Windows, macOS, and Linux,
without requiring modifications.
6. Open Source and Community Support
Python is free to use and distribute. Its vast community ensures continuous development,
extensive documentation, and support for developers.
7. Scalability and Versatility
Python supports object-oriented, procedural, and functional programming paradigms, making
it suitable for small scripts as well as large-scale enterprise applications.

Applications of Python
Python’s versatility enables it to excel in numerous fields:

1. Web Development
Frameworks like Django and Flask allow developers to build robust and scalable web
applications. These frameworks simplify complex tasks like URL routing, database
interactions, and HTML rendering.
2. Data Science and Machine Learning
Python is the preferred language for data analysis and machine learning. Libraries like
Pandas, NumPy, and Matplotlib facilitate data manipulation and visualization, while
TensorFlow, Keras, and PyTorch enable the creation of sophisticated AI models.
3. Scientific Computing
Python is widely used in research and academia for simulations, numerical computations, and
statistical analysis. Libraries like SciPy and SymPy cater specifically to these needs.
4. Automation and Scripting
Python is ideal for automating repetitive tasks, such as data scraping, file management, and
testing. Tools like Selenium and BeautifulSoup enhance its automation capabilities.
5. Game Development
Game developers use Python to create games or game prototypes. Libraries like Pygame
provide tools for handling graphics, sound, and input devices.
6. Embedded Systems
Python is used in IoT and embedded systems to write scripts for devices like Raspberry Pi,
enabling hobbyists and professionals to develop innovative hardware solutions.
7. Finance and FinTech
Python is extensively used for quantitative analysis, financial modeling, and risk
management. Libraries like QuantLib and PyAlgoTrade simplify financial computations.
8. Cybersecurity and Ethical Hacking
Python is a powerful tool for penetration testing, malware analysis, and network scanning.
Tools like Scapy and PyCrypto are widely used in cybersecurity applications.

Benefits of Learning Python


Python’s popularity is largely attributed to the advantages it offers:

1. Ease of Learning
Python’s intuitive syntax makes it accessible to beginners, while its extensive libraries and
frameworks cater to experienced developers.
2. Rapid Development
Python’s simplicity allows for faster prototyping and deployment, making it a go-to language
for startups and agile development teams.
3. Integration Capabilities
Python integrates seamlessly with other languages and technologies, such as C/C++, Java,
and .NET, enabling developers to leverage existing codebases.
4. Job Market Demand
Python’s widespread adoption across industries has created a high demand for Python
developers, offering lucrative career opportunities.
5. Future-Ready Language
Python’s role in emerging technologies like AI, IoT, and blockchain ensures its relevance for
years to come.

Challenges of Python
Despite its many advantages, Python does have some limitations:

1. Performance Issues
Python is slower than compiled languages like C++ due to its interpreted nature. This makes
it less suitable for performance-critical applications.
2. Weak Mobile Development Support
Python is not commonly used for mobile app development, as frameworks for this purpose
are less mature compared to Android or iOS native tools.
3. Global Interpreter Lock (GIL)
Python’s GIL limits the performance of multithreaded applications, particularly in CPU-
bound tasks.
4. Dependency Management
Managing dependencies in large projects can be challenging, although tools like virtualenv
and pipenv mitigate this issue.
CHAPTER 2
PYTHON LANGUAGE ESSENTIALS

Python is a powerful and versatile programming language known for its simplicity and ease of use.
To write effective Python programs, it is essential to understand its foundational elements, including
syntax, variables, data types, control flow structures, functions, and modules. This chapter delves
into the core building blocks of Python, equipping you with the tools to develop efficient and
maintainable code.

1. Basic Syntax
Python’s syntax is simple, clean, and easy to read, which makes it an excellent choice for beginners
and professionals alike. Here are some key points about Python's syntax:

 Indentation: Unlike many other languages, Python uses indentation (whitespace) to define
blocks of code. Consistency in indentation is crucial, as it determines the structure of your
code.

# Example of indentation
if 5 > 3:
    print("5 is greater than 3")
 Comments: Comments in Python begin with the # symbol. They are used to add
explanations or notes in the code and are ignored during execution.

# This is a comment
print("Hello, World!")  # Inline comment
 Case Sensitivity: Python is case-sensitive. For example, variable and Variable are considered
different identifiers.
 Statement Termination: Python does not require semicolons to terminate statements,
making the code cleaner and easier to read.

print("Python is easy to learn")

2. Variables and Data Types


In Python, variables are used to store data values. Variables are dynamically typed, meaning you do
not need to declare their type explicitly.

 Defining Variables:

name = "Alice"      # String
age = 25            # Integer
height = 5.6        # Float
is_student = True   # Boolean
 Data Types: Python supports a variety of data types:
o Numeric Types: int, float, complex
o Sequence Types: list, tuple, range
o Text Type: str
o Boolean Type: bool
o Set Types: set, frozenset
o Mapping Type: dict
o None Type: NoneType
 Type Conversion: Python allows converting one data type to another using functions like
int(), float(), str(), and bool().

x = "100"
y = int(x)      # Convert string to integer
print(y + 50)   # Output: 150

3. Control Flow Statements


Control flow statements allow you to direct the execution flow of your program based on conditions
or iterations.

 Conditional Statements: Python uses if, elif, and else for decision-making.

age = 20
if age < 18:
    print("Minor")
elif age < 65:
    print("Adult")
else:
    print("Senior Citizen")
 Loops: Python supports for and while loops for iteration.

# For loop
for i in range(5):
    print(i)

# While loop
count = 0
while count < 3:
    print("Count:", count)
    count += 1
 Break and Continue: Use break to exit a loop prematurely and continue to skip the rest of
the loop's current iteration.

for num in range(5):
    if num == 3:
        break
    print(num)

4. Functions
Functions allow you to encapsulate reusable blocks of code, promoting modularity and readability.

 Defining Functions:

def greet(name):
    return f"Hello, {name}!"

print(greet("Alice"))
 Default Arguments:

def greet(name="Guest"):
    return f"Hello, {name}!"

print(greet())  # Output: Hello, Guest!


 Lambda Functions: Lambda functions are anonymous, single-expression functions.

square = lambda x: x ** 2
print(square(4))  # Output: 16

5. Data Structures
Python provides powerful built-in data structures that help in storing and organizing data.

 Lists: Lists are mutable sequences used to store collections of items.

fruits = ["apple", "banana", "cherry"]
fruits.append("orange")
print(fruits)
 Tuples: Tuples are immutable sequences, often used to represent fixed collections of items.
coordinates = (10, 20)
print(coordinates)
 Dictionaries: Dictionaries store data as key-value pairs.

student = {"name": "Alice", "age": 20}
print(student["name"])
 Sets: Sets store unique, unordered items.

numbers = {1, 2, 3, 4}
numbers.add(5)
print(numbers)
6. Modules and Packages
Python allows code reusability through modules and packages.

 Importing Modules:

import math
print(math.sqrt(16))  # Output: 4.0
 Creating a Module: Save the following code in a file named mymodule.py:

def add(a, b):
    return a + b
Then import it:

from mymodule import add
print(add(5, 3))  # Output: 8
 Packages: Packages are directories containing multiple modules, with an __init__.py file.

7. Error and Exception Handling


Python uses try, except, and finally blocks to handle exceptions gracefully.
try:
    result = 10 / 0
except ZeroDivisionError as e:
    print(f"Error: {e}")
finally:
    print("Execution finished.")  # runs whether or not an exception occurred
CHAPTER 3
INTRODUCTION TO DATA SCIENCE
3.1 Data Science Overview
Data Science is the study of data. Just as the biological sciences study living things and the
physical sciences study physical phenomena, data science studies data. Data is real, data has real
properties, and we need to study those properties if we are going to work with data.
Data science is a process, not an event. It is the process of using data to understand many
different things, to understand the world. Suppose you have a model or a proposed explanation of
a problem, and you try to validate that explanation or model with your data.
Data science is the skill of uncovering the insights and trends that hide behind data. It is what
happens when you translate data into a story, using storytelling to generate insight. And with these
insights, you can make strategic choices for a company or an institution.
We can also define data science as a field concerned with the processes and systems used to extract
data of various forms from various sources, whether that data is unstructured or structured.

Predictive Modeling:
Predictive modeling is a form of artificial intelligence that uses data mining and probability to
forecast or estimate more granular, specific outcomes.
For example, predictive modeling could help identify customers who are likely to purchase our new
One AI software over the next 90 days.

Machine Learning:
Machine Learning is a branch of artificial intelligence (AI) in which computers learn to act and
adapt to new data without being explicitly programmed to do so. The computer is able to act
independently of human interaction.

Forecasting:
Forecasting is the process of predicting or estimating future events based on past and present data,
most commonly by analysis of trends. "Guessing" doesn't cut it. A forecast, unlike a prediction, must
have logic to it. It must be defendable. This logic is what differentiates it from the magic 8 ball's
lucky guess. After all, even a broken watch is right twice a day.

Applications of Data Science


Data science and big data are making an undeniable impact on businesses, changing day-to-day
operations, financial analytics, and especially interactions with customers. It's clear that businesses can
gain enormous value from the insights data science can provide. But sometimes it's hard to see exactly
how. So let's look at some examples. In this era of big data, almost everyone generates masses of data
every day, often without being aware of it. This digital trace reveals the patterns of our online lives. If
you have ever searched for or bought a product on a site like Amazon, you'll notice that it starts making
recommendations related to your search. This type of system, known as a recommendation engine, is a
common application of data science. Companies like Amazon, Netflix, and Spotify use algorithms to
make specific recommendations derived from customer preferences and historical behavior. Personal
assistants like Siri on Apple devices use data science to devise answers to the infinite number of
questions end users may ask. Google watches your every move in the world: your online shopping
habits and your social media activity. It then analyzes that data to create recommendations for
restaurants, bars, shops, and other attractions based on the data collected from your device and your
current location. Wearable devices like Fitbits, Apple Watches, and Android watches add information
about our activity, sleep, and health to this ever-growing pool of personal data.
CHAPTER 4
NUMPY
Introduction to NumPy
NumPy (Numerical Python) is a fundamental Python library widely used in scientific computing,
data analysis, and machine learning. Its primary purpose is to provide support for large, multi-
dimensional arrays and matrices, along with a collection of mathematical functions to operate on
these arrays efficiently. NumPy is the foundation for many other Python libraries in the data science
ecosystem, such as Pandas, SciPy, and TensorFlow, making it an essential tool for any data
practitioner.

1. Why Use NumPy?


Python's standard lists and arrays are versatile but limited in terms of speed and efficiency when
working with large datasets. NumPy addresses these limitations by providing:

 Performance: NumPy arrays, also called ndarrays (N-dimensional arrays), are significantly
faster than Python lists because they are implemented in C. This makes operations on arrays
more efficient.
 Memory Efficiency: NumPy arrays are more memory-efficient than Python lists because
they store data in a contiguous block of memory, reducing overhead.
 Rich Functionality: NumPy includes a vast collection of mathematical and logical functions
that simplify operations on arrays, such as element-wise addition, broadcasting, and linear
algebra.
 Integration: NumPy seamlessly integrates with other scientific libraries and tools, forming
the backbone of Python’s data science and machine learning frameworks.

2. Key Features of NumPy


 N-dimensional Array: The core feature of NumPy is its N-dimensional array object, which
allows for efficient storage and manipulation of data. Arrays can have one dimension
(vectors), two dimensions (matrices), or more.
 Broadcasting: NumPy supports broadcasting, which enables arithmetic operations between
arrays of different shapes without requiring explicit resizing.
 Mathematical Operations: NumPy includes functions for mathematical operations such as
addition, subtraction, multiplication, division, exponentiation, trigonometry, and more.
 Linear Algebra: It supports matrix operations, eigenvalues, singular value decomposition
(SVD), and other linear algebra functions.
 Random Number Generation: NumPy provides tools for generating random numbers for
simulations, random sampling, and probabilistic modeling.
 Integration with Other Libraries: NumPy arrays serve as the standard data structure for
many Python libraries, including Pandas, SciPy, and scikit-learn.

3. NumPy Arrays
3.1 The N-dimensional Array (ndarray)
The ndarray is the central object in NumPy. Unlike Python lists, NumPy arrays are homogeneous,
meaning all elements in the array are of the same data type. This uniformity enables faster operations
and memory efficiency.

 Dimensions: Arrays can have any number of dimensions. For example:


o A 1D array (vector) stores a single list of values.
o A 2D array (matrix) contains rows and columns.
o Higher-dimensional arrays store tensors.
 Attributes: NumPy arrays come with attributes like:
o shape: Tuple representing the dimensions of the array.
o dtype: Data type of the array elements (e.g., integers, floats).
o size: Total number of elements in the array.
o ndim: Number of dimensions.
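
As a quick illustration, these attributes can be inspected on a small hand-built array (a minimal
sketch, assuming NumPy is installed and imported as np):

import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # a 2D array (matrix)
print(a.shape)  # (2, 3) -> 2 rows, 3 columns
print(a.dtype)  # a platform-dependent integer type such as int64
print(a.size)   # 6 -> total number of elements
print(a.ndim)   # 2 -> number of dimensions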

3.2 Array Creation


NumPy provides several ways to create arrays:

 From Lists or Tuples: Arrays can be initialized from Python sequences.


 Using Built-in Functions:
o zeros: Creates an array filled with zeros.
o ones: Creates an array filled with ones.
o arange: Creates an array with a range of values.
o linspace: Generates evenly spaced numbers between two points.
o random: Produces arrays with random values.
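
A minimal sketch of these constructors; the values shown in the comments are what NumPy
produces:

import numpy as np

a = np.array([1, 2, 3])      # from a Python list
z = np.zeros((2, 3))         # 2x3 array filled with zeros
o = np.ones(4)               # 1D array of four ones
r = np.arange(0, 10, 2)      # [0 2 4 6 8]
ls = np.linspace(0, 1, 5)    # [0.   0.25 0.5  0.75 1.  ]
rnd = np.random.rand(2, 2)   # 2x2 array of uniform values in [0, 1)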

4. Array Operations
4.1 Element-wise Operations
NumPy supports element-wise arithmetic operations, such as addition, subtraction, multiplication,
and division, between arrays. These operations are applied to corresponding elements, making it
simple to perform bulk computations.
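
For instance, arithmetic operators apply element by element when two arrays share the same shape
(a small sketch):

import numpy as np

x = np.array([1, 2, 3])
y = np.array([10, 20, 30])
print(x + y)  # [11 22 33]
print(x * y)  # [10 40 90]
print(y / x)  # [10. 10. 10.]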

4.2 Broadcasting

Broadcasting allows operations between arrays of different shapes. NumPy automatically expands
the smaller array to match the dimensions of the larger one during computation.
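
A small sketch of broadcasting, in which a 1D row and a scalar are stretched to match a 2D array
without any explicit copying:

import numpy as np

m = np.array([[1, 2, 3], [4, 5, 6]])  # shape (2, 3)
row = np.array([10, 20, 30])          # shape (3,)
print(m + row)  # the row is broadcast across both rows of m
print(m * 2)    # the scalar 2 is broadcast to every element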

4.3 Mathematical Functions


NumPy provides functions for a variety of mathematical operations, such as:

 Basic arithmetic: add, subtract, multiply, divide.


 Trigonometric functions: sin, cos, tan, etc.
 Exponential and logarithmic functions: exp, log, log10.
 Statistical operations: mean, median, std, var.
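
A brief sketch of the statistical helpers on a small sample array:

import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.mean(data))    # 5.0
print(np.median(data))  # 4.5
print(np.std(data))     # 2.0 (population standard deviation by default)
print(np.var(data))     # 4.0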
CHAPTER 5
DATA VISUALIZATION USING MATPLOTLIB AND PANDAS
Data visualization is an essential aspect of data analysis and communication. It enables us to present
complex data in a visual format, making it easier to interpret patterns, trends, and insights. Among
Python's many data visualization libraries, Matplotlib stands out as a versatile and widely-used tool.
Its simplicity and flexibility make it the foundation for other visualization libraries like Seaborn and
Plotly.


1. Introduction to Matplotlib
Matplotlib is a comprehensive library for creating static, interactive, and animated visualizations in
Python. It supports a wide range of plot types, from simple line plots to intricate 3D visualizations.
The library is highly customizable, allowing users to control every aspect of a plot, including axes,
labels, colors, and styles.

 Core Components of Matplotlib:


o Figure: The entire canvas or space where plots are drawn.
o Axes: The part of the figure where data is plotted (e.g., the chart area).
o Plot: The graphical representation of data, such as lines, bars, or scatter points.

2. Importance of Data Visualization


Data visualization serves multiple purposes:

 Simplifies Complex Data: Visual representations make it easier to understand large datasets.
 Identifies Patterns and Trends: Charts and graphs reveal insights that might not be apparent
from raw data.
 Communicates Insights Effectively: Well-designed visuals convey information clearly to
both technical and non-technical audiences.
 Facilitates Decision-Making: Stakeholders can make informed decisions based on visually
presented data.

3. Key Features of Matplotlib


Matplotlib offers several features that make it a powerful visualization tool:

 Customizability: Control over colors, markers, line styles, and annotations.


 Support for Various Plot Types: Line plots, bar charts, histograms, scatter plots, pie charts,
and more.
 Integration: Works seamlessly with other libraries like NumPy and Pandas.
 Interactive Capabilities: Enables zooming, panning, and saving plots in various formats.
4. Common Plot Types in Matplotlib
Matplotlib provides a wide array of visualization options, each suited to different types of data and
analysis objectives.

4.1 Line Plot


A line plot is ideal for visualizing data trends over a continuous interval, such as time series data.

 Use Case: Stock prices over time, temperature variations, or sales growth.
 Advantages: Effective for showing changes and trends.

4.2 Bar Chart


A bar chart represents categorical data using rectangular bars, where the length of each bar is
proportional to the value it represents.

 Use Case: Comparing sales across different regions, product categories, or age groups.
 Advantages: Easy comparison of discrete categories.

4.3 Histogram
A histogram displays the frequency distribution of a dataset by dividing data into intervals or bins.

 Use Case: Understanding the distribution of student test scores or income levels.
 Advantages: Reveals the underlying distribution of data.

4.4 Scatter Plot


A scatter plot represents relationships between two variables using points in a Cartesian coordinate
system.

 Use Case: Examining the correlation between variables like height and weight or sales and
marketing expenses.
 Advantages: Highlights outliers and the strength of relationships.

4.5 Pie Chart


A pie chart visualizes proportions of a whole as slices of a circle.

 Use Case: Showing the market share of companies or the composition of a population by age
groups.
 Advantages: Simple and effective for proportion data.

4.6 Box Plot


A box plot (or box-and-whisker plot) displays the distribution of data based on five summary
statistics: minimum, first quartile, median, third quartile, and maximum.

 Use Case: Analyzing variability in exam scores or financial returns.


 Advantages: Highlights outliers and spread.

5. Components of a Matplotlib Plot


A well-designed plot consists of multiple components:
1. Title: Describes the purpose of the plot.
2. Axes: Includes x-axis and y-axis with labels.
3. Legend: Explains what different colors or markers represent.
4. Grid: Enhances readability by adding background lines.
5. Annotations: Highlights specific points or features.

6. Customizing Visualizations
Matplotlib provides extensive customization options to create visually appealing and informative
plots:

 Colors and Styles: Choose from a range of colors, line styles, and marker types.
 Labels and Titles: Add meaningful titles, axis labels, and legends to improve clarity.
 Scaling and Ticks: Adjust scales (linear, logarithmic) and control the placement of ticks.
 Annotations: Highlight specific data points with text annotations or arrows.

Customization ensures that visualizations are tailored to the audience and the data being presented.

Plot Types in Matplotlib


Matplotlib supports a variety of plot types, each suitable for visualizing different kinds of data.
The most common plot types include:

 Line Plots: Used for visualizing data points connected by a line, ideal for time series data or
continuous data trends.
 Bar Plots: Useful for comparing different categories or groups, where each bar represents a
category and the length indicates its value.
 Histograms: Display the distribution of data by grouping values into bins. They are typically
used to analyze the frequency of data within intervals.
 Scatter Plots: Represent relationships between two variables using points on a Cartesian
plane, ideal for exploring correlations or trends.
 Pie Charts: Used to represent proportions of a whole, where each slice corresponds to a
category and its size reflects the proportion.
 Box Plots: Show the distribution of data based on five summary statistics (minimum, first
quartile, median, third quartile, and maximum).
 Heatmaps: Display data in matrix form where individual values are represented by colors,
useful for visualizing the intensity of a variable across two dimensions.

7. Creating a Plot in Matplotlib


1. Importing Matplotlib: The core Matplotlib library is imported, and the pyplot module is
usually imported as plt, which simplifies plotting.
2. Creating Data: Data can come from various sources, including NumPy arrays, Pandas
DataFrames, or raw lists.
3. Plotting Data: Use plotting functions like plot(), scatter(), bar(), or hist() to plot the data on a
figure.
4. Customizing the Plot: Add titles, axis labels, legends, and grid lines for clarity and context.
5. Displaying the Plot: Call plt.show() to display the plot on the screen.
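
A minimal sketch of this five-step workflow, using made-up yearly sales figures:

import matplotlib.pyplot as plt

# 1-2. Import the library and create some data
years = [2020, 2021, 2022, 2023]
sales = [120, 150, 170, 210]

# 3. Plot the data as a line plot with point markers
plt.plot(years, sales, marker="o", label="Sales")

# 4. Customize the plot with a title, axis labels, a legend, and a grid
plt.title("Sales Growth Over Time")
plt.xlabel("Year")
plt.ylabel("Sales (units)")
plt.legend()
plt.grid(True)

# 5. Display the plot
plt.show()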
PANDAS IN PYTHON

Pandas is a powerful and widely-used Python library designed for data analysis and manipulation.
Built on top of NumPy, it offers high-performance, easy-to-use data structures such as Series and
DataFrame, making it the go-to tool for handling structured data in Python. Whether dealing with
small datasets or massive data collections, Pandas simplifies data cleaning, transformation, and
analysis.

Key Features of Pandas

1. Data Structures:
o Series: A one-dimensional labeled array capable of holding any data type, such as
integers, strings, floats, or objects. It is similar to a column in a spreadsheet or a
database.
o DataFrame: A two-dimensional, tabular data structure with labeled rows and
columns. It resembles a table in relational databases or an Excel spreadsheet.
2. Data Handling:
o Easy reading and writing of data from and to various file formats like CSV, Excel,
JSON, SQL, and more.
o Efficient handling of missing data, such as filling, dropping, or interpolating values.
3. Data Manipulation:
o Filtering and selecting specific rows or columns.
o Merging, joining, and concatenating datasets.
o Grouping data for aggregation and analysis using the "groupby" functionality.
4. Data Transformation:
o Applying custom functions to transform data.
o Reshaping data with pivot tables and stack/unstack operations.
5. Data Analysis:
o Performing statistical and mathematical operations on data.
o Supporting time-series data for date-based analysis.
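
A short sketch of a few of these features, using a small made-up sales table:

import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "amount": [250, 300, 150, 400],
})

big_orders = sales[sales["amount"] > 200]         # filter rows
totals = sales.groupby("region")["amount"].sum()  # group and aggregate
print(totals)  # North 400, South 700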

Advantages of Pandas

1. Ease of Use: Pandas provides intuitive and simple methods for data manipulation, making it
accessible to both beginners and experienced programmers.
2. Efficiency: Built on NumPy, Pandas ensures high performance for operations on large
datasets.
3. Versatility: It can handle various types of data, including structured, semi-structured, and
unstructured data.
4. Integration: Works seamlessly with other Python libraries like Matplotlib and SciPy,
enabling comprehensive data analysis workflows.

Applications of Pandas

1. Data Cleaning: Pandas simplifies the process of preparing raw data for analysis by handling
missing values, duplicates, and inconsistent formats.
2. Exploratory Data Analysis (EDA): Pandas provides statistical summaries, visualizations,
and tools for understanding dataset distributions.
3. Data Wrangling: It enables merging, reshaping, and reorganizing datasets, making them
suitable for analysis.
4. Time-Series Analysis: Pandas supports time-indexed data, making it ideal for analyzing
stock prices, weather patterns, and more.
5. Machine Learning: Often used to preprocess data before feeding it into machine learning
algorithms.

Pandas Data Structures


Series
A Series is a one-dimensional labeled array capable of holding any data type. It is similar to a
column in a spreadsheet.

 Creating a Series:

import pandas as pd

data = [10, 20, 30]
series = pd.Series(data)

DataFrame
A DataFrame is a two-dimensional, tabular data structure with labeled rows and columns, much like
an Excel spreadsheet.

 Creating a DataFrame:
data = {'Name': ['Alice', 'Bob'], 'Age': [25, 30]}
df = pd.DataFrame(data)

Loading Data
Pandas supports various data formats, making it easy to read and write data.

 CSV File:
df = pd.read_csv('file.csv')

 Excel File:
df = pd.read_excel('file.xlsx')

 SQL Database:
import sqlite3
conn = sqlite3.connect('database.db')
df = pd.read_sql_query("SELECT * FROM table_name", conn)

Exploring Data
Once the data is loaded into a DataFrame, you can explore it using the following methods:

 View the first few rows:


df.head()

 Get basic information about the DataFrame:


df.info()

 View statistical summary:


df.describe()

 Check column names:


df.columns
 COMPARISON BETWEEN MATPLOTLIB AND PANDAS:

Purpose
Matplotlib: primarily used for creating visualizations like plots, charts, and graphs.
Pandas: used for data manipulation, analysis, and handling structured datasets.

Data Structures
Matplotlib: does not have its own data structure; works with data from lists, arrays, or Pandas DataFrames.
Pandas: provides Series (1D) and DataFrame (2D) for structured data handling.

Focus Area
Matplotlib: visualization and presentation of data.
Pandas: data cleaning, manipulation, and analysis.

Typical Outputs
Matplotlib: generates plots such as line graphs, bar charts, histograms, scatter plots, etc.
Pandas: outputs structured data that can be saved, reshaped, or analyzed.

Integration
Matplotlib: often used alongside Pandas for visualizing its DataFrames or Series.
Pandas: integrates with Matplotlib to visualize processed data.

Complexity
Matplotlib: requires more effort to format and style plots; highly customizable.
Pandas: easier to use for data handling; less focus on visual representation.

Data Visualization
Matplotlib: specialized for creating static, animated, and interactive visualizations.
Pandas: limited built-in visualization tools (e.g., the .plot() method).

Library Dependency
Matplotlib: works independently but benefits from integration with other libraries like Pandas and NumPy.
Pandas: relies on libraries like Matplotlib or Seaborn for advanced visualizations.

File Formats Supported
Matplotlib: can save plots in formats like PNG, SVG, PDF, etc.
Pandas: can export structured data to formats like CSV, Excel, JSON, etc.

Customization
Matplotlib: high degree of control over visual elements like axes, labels, legends, and styles.
Pandas: limited customization for visual outputs; focuses on data content rather than style.

Learning Curve
Matplotlib: steeper learning curve due to detailed plotting configurations.
Pandas: relatively easier to learn for basic data operations.

Example Use Case
Matplotlib: creating a bar chart to show sales by region.
Pandas: calculating total sales by region and preparing the data for visualization.
CHAPTER 6
BASICS OF TABLEAU

Tableau is one of the most widely used data visualization and business intelligence (BI) tools in the
world. It enables users to connect to various data sources, analyze the data, and create interactive and
shareable dashboards and visualizations. Tableau is known for its intuitive drag-and-drop interface,
which allows users to build complex visualizations and insights without the need for programming
knowledge. This chapter will provide an overview of the basics of Tableau, including its features,
components, and the steps involved in creating and sharing data visualizations.

1. Introduction to Tableau
Tableau is a powerful data visualization tool used by individuals and organizations to explore and
analyze data in a visual format. It allows users to create a wide range of visualizations like bar charts,
line graphs, heatmaps, scatter plots, and more, making it easier to identify patterns, trends, and
outliers in the data. Tableau provides an interactive environment where users can drill down into
data, filter views, and explore different angles of the dataset.

Tableau comes in different versions that cater to different needs:

 Tableau Desktop: The primary version of Tableau for individual users. It allows users to
connect to various data sources, analyze the data, and create visualizations and dashboards.
Tableau Desktop offers two editions: Personal (for individual use) and Professional (for
sharing and working with more diverse data sources).
 Tableau Server: A web-based platform that allows organizations to share, collaborate, and
distribute Tableau reports and dashboards across teams and stakeholders.
 Tableau Online: A cloud-based version of Tableau Server. It provides similar features but is
hosted and managed by Tableau.
 Tableau Public: A free version of Tableau that allows users to create visualizations and
publish them online. However, data and workbooks created in Tableau Public are publicly
available, meaning that users cannot keep their data private.
 Tableau Prep: A tool designed for data preparation and cleaning, enabling users to
transform, clean, and shape data before using it for visualization.

2. Key Features of Tableau


Tableau is known for its wide array of features that make data analysis and visualization efficient and
easy. Some of the key features include:

 Data Connectivity: Tableau can connect to various data sources, including Excel, SQL
databases, Google Analytics, cloud-based data like Amazon Redshift, and even web data
connectors. Tableau provides connectors for more than 40 data sources, ensuring flexibility
and scalability.
 Drag-and-Drop Interface: One of the standout features of Tableau is its intuitive drag-and-
drop interface. Users can simply drag fields from their data into the Tableau workspace to
create different types of visualizations without needing to write any code.
 Interactive Dashboards: Tableau allows users to create interactive dashboards where they
can click on specific data points to filter data in real-time. This interaction makes it easier for
users to explore data and gain deeper insights.
 Real-Time Data Analysis: Tableau provides real-time data updates by connecting directly to
live data sources. This ensures that the visualizations and dashboards always reflect the most
up-to-date data without the need for manual updates.
 Calculated Fields: Tableau allows users to create calculated fields using its in-built formula
language. These fields are useful for performing mathematical or logical calculations on the
data.
 Data Blending: Tableau can blend data from different data sources, enabling users to
combine information from multiple databases into a single visualization or dashboard.
 Advanced Analytics: Tableau supports advanced analytics features such as trend lines,
forecasting, reference lines, and clustering. These features help users to uncover insights from
the data beyond simple visualizations.

3. Tableau Interface and Workspace


The Tableau interface is designed to be user-friendly, with several key components that help users
organize their work and interact with data. Here are the major components of Tableau’s workspace:

 Data Pane: The Data Pane is located on the left side of the workspace and lists all the data
fields from the connected data source. These fields are categorized into dimensions
(qualitative data) and measures (quantitative data).
 Shelves: Shelves are areas in the Tableau workspace where users can drag and drop fields to
create visualizations. Some of the key shelves include:
o Rows Shelf: Placing fields here will create rows in the visualization.
o Columns Shelf: Placing fields here will create columns in the visualization.
o Filters Shelf: Users can apply filters to the data by placing fields here.
o Marks Card: The Marks Card controls the appearance of a visualization. It allows users to
adjust things like color, size, detail, and shape of data points.
 Worksheet: A worksheet is where users create individual visualizations. Each worksheet can
contain one visualization, such as a bar chart or pie chart, based on the data fields added to
the rows, columns, and marks.
 Dashboard: A dashboard is a collection of multiple worksheets and visualizations displayed
together on a single canvas. Dashboards allow users to see multiple perspectives of the data at
once.
 Story: A Story in Tableau is a sequence of visualizations that work together to convey a
narrative. It can be used to tell a data-driven story, guiding users through insights and
observations step by step.

4. Data Preparation and Transformation in Tableau


Before creating visualizations, it's essential to prepare and clean the data. Tableau offers several
features to make this process easier:

 Data Connection: To begin, Tableau connects to a wide range of data sources. Whether it's
an Excel file, an SQL database, or a cloud data platform, Tableau allows users to import data
quickly. Tableau automatically detects data types and provides an overview of the data.
 Data Shaping: Once data is loaded, Tableau allows users to shape and transform it according
to their needs. Users can filter out unnecessary data, join or merge datasets, and pivot or
unpivot data to create the right structure for analysis.
 Data Blending: When working with multiple data sources, Tableau allows users to blend
data to bring it together into a unified view. This is particularly useful when working with
data from different departments or systems.
CHAPTER 7
WORKING WITH DATABASES IN PYTHON

Working with databases is a fundamental skill in modern software development and data science.
Databases are used to store, retrieve, and manage large amounts of structured data efficiently. In this
chapter, we will explore the basics of working with databases using Python, focusing on key
concepts such as database types, SQL queries, and integrating Python with relational databases like
SQLite, MySQL, and PostgreSQL.

1. Introduction to Databases
A database is an organized collection of data that can be easily accessed, managed, and updated. In
the context of software applications, databases are used to store data such as user information,
transaction records, product details, and much more. Databases allow for efficient storage and
retrieval of data, which is crucial for the performance and scalability of applications.

Databases can be classified into two main categories:

 Relational Databases: These databases store data in tables, which consist of rows and
columns. The relationships between the data are defined by keys. Relational databases use
Structured Query Language (SQL) to manage and manipulate data. Common examples
include MySQL, PostgreSQL, and SQLite.
 Non-relational (NoSQL) Databases: Unlike relational databases, NoSQL databases do not
store data in tabular forms. They are more flexible and can store data in various formats like
key-value pairs, documents, or graphs. Popular NoSQL databases include MongoDB,
Cassandra, and Redis.

In this chapter, we will focus on relational databases and how Python can interact with them to
perform various operations such as querying data, inserting records, and updating or deleting
information.

2. Setting Up Database Environment


Before you can interact with a database, it is essential to set up a database management system
(DBMS). This system provides the tools to create, manage, and interact with the database.

1. SQLite: SQLite is a lightweight, file-based database system that requires no server or setup
process. It is built into Python’s standard library, which makes it a great option for small
applications or learning purposes.
2. MySQL: MySQL is one of the most widely used relational databases. It is often used in web
development, data warehousing, and enterprise applications. MySQL requires setting up a
server and defining database connections.
3. PostgreSQL: PostgreSQL is an open-source relational database that emphasizes extensibility
and SQL compliance. It is used in large-scale applications and systems that require complex
queries and transactional support.

3. Understanding SQL (Structured Query Language)


SQL is a standard programming language used to manage and manipulate relational databases. It
allows users to define, query, and modify data in databases. SQL queries are typically divided into
several categories:
 Data Query Language (DQL): Used to retrieve data from the database. The most common
SQL query in DQL is SELECT, which fetches data from one or more tables.
 Data Definition Language (DDL): Used to define database structures, such as creating and
modifying tables. Key DDL commands include CREATE, ALTER, and DROP.
 Data Manipulation Language (DML): Used to manipulate data within the database. Key
DML commands include INSERT, UPDATE, and DELETE.
 Data Control Language (DCL): Used to control access to data. Commands such as GRANT
and REVOKE fall under this category.

A basic understanding of SQL is necessary when working with databases in Python. SQL allows you
to retrieve, modify, and organize data in powerful ways, making it an indispensable tool for anyone
working with relational databases.

4. Connecting Python to Databases


To interact with a database from Python, you need a database connector or driver. This software
acts as a bridge between Python and the database, allowing you to send SQL commands and retrieve
results.

Python has several libraries to connect with various types of databases:


 SQLite: Python’s standard library includes the sqlite3 module, which allows seamless
interaction with SQLite databases.
 MySQL: The mysql-connector-python library enables Python applications to communicate
with MySQL databases.
 PostgreSQL: The psycopg2 library is a popular choice for connecting Python to PostgreSQL
databases.
Once the connection is established, Python can send SQL queries to the database, retrieve the results,
and even modify the data.
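
As a minimal sketch, the built-in sqlite3 module can create and connect to a local database file
(the example.db file and users table here are hypothetical):

import sqlite3

# Connect to (or create) a local SQLite database file
conn = sqlite3.connect("example.db")
cursor = conn.cursor()

# Create a simple table if it does not already exist
cursor.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()
conn.close()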
5. Executing SQL Queries
Once you have connected Python to a database, you can execute SQL queries to perform various
actions such as retrieving or modifying data.

 SELECT Query: The SELECT statement is used to retrieve data from one or more tables.
You can filter, sort, and aggregate the data using various clauses like WHERE, ORDER BY,
and GROUP BY.
 INSERT Query: The INSERT INTO statement adds new records into a table. It allows you
to specify the columns and values for the new rows.
 UPDATE Query: The UPDATE statement modifies existing records in a table. You can
specify which rows to update using the WHERE clause.
 DELETE Query: The DELETE statement removes one or more records from a table. It is
important to use the WHERE clause to avoid deleting all records in the table.

SQL queries allow for powerful and flexible interaction with data. With Python, these queries can be
dynamically generated, executed, and the results processed for further analysis or reporting.
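
Continuing the hypothetical users table from the sketch above, the four statement types might look
like this; the ? placeholders let the driver insert values safely rather than formatting them into the
SQL string by hand:

import sqlite3

conn = sqlite3.connect("example.db")
cursor = conn.cursor()

# INSERT a new record
cursor.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))

# SELECT with a WHERE filter
cursor.execute("SELECT id, name FROM users WHERE name = ?", ("Alice",))
print(cursor.fetchall())

# UPDATE and DELETE, each restricted by a WHERE clause
cursor.execute("UPDATE users SET name = ? WHERE name = ?", ("Alicia", "Alice"))
cursor.execute("DELETE FROM users WHERE name = ?", ("Alicia",))

conn.commit()
conn.close()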

6. Transactions and Data Integrity


A transaction is a sequence of one or more SQL operations that are executed as a single unit. A
transaction ensures that a series of operations are completed successfully or, if there is an error, no
changes are made to the database. This is crucial for maintaining data integrity and consistency.
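
A small sketch of this idea with sqlite3, again using the hypothetical users table: the two inserts
either both succeed or both roll back.

import sqlite3

conn = sqlite3.connect("example.db")
try:
    cursor = conn.cursor()
    cursor.execute("INSERT INTO users (name) VALUES (?)", ("Bob",))
    cursor.execute("INSERT INTO users (name) VALUES (?)", ("Carol",))
    conn.commit()    # both inserts are applied together
except sqlite3.Error:
    conn.rollback()  # on any error, neither insert is applied
finally:
    conn.close()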
CHAPTER 8
BASICS OF POWER BI

Power BI is a powerful business analytics tool developed by Microsoft. It allows users to visualize
data, share insights, and make data-driven decisions by transforming raw data into interactive and
meaningful visualizations. Whether for business intelligence, data analysis, or reporting, Power BI
enables users to access and analyze data from various sources and create dynamic dashboards and
reports. This chapter covers the basics of Power BI, including its components, interface, data
loading, visualization features, and how it enhances business decision-making.

1. Introduction to Power BI
Power BI is a suite of business analytics tools that enables users to visualize data, share insights, and
make informed decisions. It allows businesses to consolidate data from different sources into a single
platform to analyze and create interactive reports and dashboards. Power BI is widely used for data
visualization and reporting across industries such as finance, marketing, healthcare, retail, and more.

Power BI consists of three main products:

 Power BI Desktop: A free, downloadable application for PC users that allows users to
connect to, transform, and visualize data.
 Power BI Service: A cloud-based platform where users can share reports and dashboards and
collaborate with others.
 Power BI Mobile: A mobile app that allows users to access reports and dashboards on their
smartphones and tablets.

Power BI also offers tools for advanced analytics, including natural language querying (Q&A) and
AI-powered insights, which enable users to ask questions of their data and receive instant answers.

2. Components of Power BI
Power BI consists of several key components that enable users to work with data efficiently:

 Power BI Desktop: The primary tool used for data transformation, visualization, and report
creation. It allows users to connect to a wide range of data sources, perform data cleansing,
and create complex reports and dashboards.
 Power BI Service: A cloud service where users can publish and share Power BI reports. It
allows for collaboration and sharing of interactive dashboards across teams and
organizations. The service also provides features for scheduling report updates and setting up
alerts.
 Power BI Gateway: A bridge that facilitates secure data transfer between on-premises data
sources and the Power BI Service. It ensures that reports and dashboards reflect up-to-date
information from internal systems.
 Power BI Mobile: A mobile application that enables users to view and interact with Power
BI reports and dashboards on mobile devices.
 Power BI Embedded: A service that allows developers to embed Power BI reports and
dashboards into third-party applications or websites.
 Power Query: A tool within Power BI Desktop used for extracting, transforming, and
loading (ETL) data from different sources into Power BI. It enables users to clean and
transform data before visualizing it.
 Power Pivot: A data modeling feature that allows users to create relationships between
different datasets, build calculations using Data Analysis Expressions (DAX), and manage
complex data models.

3. Data Loading and Transformation


Before creating visualizations in Power BI, it’s crucial to load and transform data. Power BI allows
users to connect to a variety of data sources, including databases, spreadsheets, cloud services, and
web APIs.

 Connecting to Data Sources: Power BI supports a wide range of data sources, including
SQL Server, Excel, Google Analytics, Salesforce, SharePoint, and more. Users can connect
to these sources through Power BI Desktop and retrieve data for analysis.
 Power Query Editor: Once the data is loaded into Power BI, users can use the Power Query
Editor to clean and transform the data. The editor allows users to:
o Remove or filter rows and columns
o Replace missing values
o Change data types
o Merge and append tables
o Apply custom transformations

Data transformation is essential for ensuring that the dataset is clean, structured, and ready for
analysis. Power Query allows users to perform complex transformations with an easy-to-use
interface.

4. Data Modeling and Relationships


In Power BI, data modeling is the process of defining relationships between different datasets to
enable complex analysis. Power BI allows users to create relationships between tables using common
fields or columns. These relationships help in creating more meaningful visualizations by linking
related data.

 Creating Relationships: Power BI automatically detects relationships between tables based


on column names. However, users can manually create relationships by defining keys
between tables (e.g., primary key and foreign key).
 Types of Relationships:
o One-to-One: Each record in one table corresponds to a single record in another table.
o One-to-Many: One record in one table corresponds to multiple records in another
table. This is the most common type of relationship.
o Many-to-Many: Multiple records in one table correspond to multiple records in
another table.
 Data Model: The data model in Power BI is a collection of tables, relationships, and
calculated columns. By designing an effective data model, users can enable more flexible and
comprehensive analysis across multiple data sources.

5. Creating Visualizations
One of the key strengths of Power BI is its ability to create rich and interactive visualizations. These
visualizations allow users to explore data from various perspectives, identify trends, and make
informed decisions.
 Types of Visualizations:
o Bar and Column Charts: Useful for comparing values across categories.
o Line and Area Charts: Ideal for showing trends over time.
o Pie and Donut Charts: Best for showing parts of a whole.
o Scatter Plots: Used for showing relationships between two variables.
o Maps: Power BI includes map visualizations to display geographical data, such as
sales by region or country.
o Tables and Matrices: Useful for displaying detailed data in a tabular format.
o Cards and KPIs: Display key metrics such as revenue, growth, or profit in a simple
and concise manner.
 Interactive Features:
o Drill-Through: Allows users to right-click on a visualization to explore more detailed
data in a separate report page.
o Cross-Filtering: When users click on a data point in one visualization, it
automatically filters other visualizations on the same report page.
o Slicers: Visual filters that allow users to dynamically filter data based on categories
like dates, regions, or products.

These features make Power BI a highly interactive tool, allowing users to explore data and uncover
insights in real-time.

6. Using DAX (Data Analysis Expressions)


DAX is a formula language used in Power BI to create calculated columns, measures, and tables.
DAX functions are similar to Excel formulas but are designed specifically for working with
relational data in Power BI.

 Calculated Columns: These are columns added to a table that are computed based on a DAX
formula. For example, you can create a calculated column to compute the profit margin by
subtracting costs from revenue.
 Measures: Measures are calculations performed on aggregated data. For example, a measure
might calculate the total sales across all regions or the average order value.
 Time Intelligence: DAX includes time-based functions that allow users to perform
calculations across different time periods. For example, you can calculate year-over-year
growth or compare sales between different months.
CONCLUSION

This course has provided an in-depth exploration of some of the most powerful tools and techniques
used in the fields of data analysis, visualization, and business intelligence. We started with the basics
of Python programming, moved on to advanced data cleaning techniques with Pandas, and explored
visualization methods with Matplotlib, before diving into practical applications such as working with
databases and creating dynamic dashboards using Power BI and Tableau. Each section has
contributed to building a comprehensive skill set, enabling learners to effectively process, analyze,
and visualize data to drive informed business decisions.

Python Programming Basics

The journey began with an introduction to Python, one of the most versatile and widely used
programming languages in data science. Python’s simplicity, readability, and extensive libraries
make it an ideal tool for both beginners and professionals in data analysis. Through this course,
learners gained foundational knowledge in Python, from its syntax to advanced techniques for
working with data structures, loops, and functions. This provided a solid grounding for tackling more
complex data tasks and understanding the principles of software development in data-related fields.

Data Cleaning with Pandas

One of the most crucial steps in data analysis is data cleaning. The course took a deep dive into
Pandas, a Python library that provides powerful data manipulation and analysis tools. Learners were
introduced to data cleaning methods such as handling missing data, filtering, and transforming
datasets, which are essential skills for ensuring data accuracy. With this knowledge, learners can now
confidently prepare raw data for analysis, ensuring that the insights they derive are both reliable and
actionable.

Data Visualization with Matplotlib

Effective communication of data insights relies on clear, impactful visualizations. This course
covered the use of Matplotlib, a widely used Python library for creating static, animated, and
interactive visualizations. Learners explored various chart types, including line plots, bar charts, and
scatter plots, learning how to choose the right visualization for different data types and analysis
goals. By mastering these techniques, learners are now equipped to present complex data in an
understandable and visually appealing manner.

Power BI for Business Intelligence

The Power BI section of the course introduced learners to a leading business intelligence tool widely
used across industries for data visualization, reporting, and decision-making. Through Power BI,
learners discovered how to connect to multiple data sources, create dynamic reports, and share
insights with stakeholders. The course emphasized the importance of interactivity in data
visualization, allowing users to drill down into the data and derive meaningful insights. Power BI’s
powerful features, such as calculated fields and real-time data updates, have empowered learners to
create comprehensive dashboards and reports that support informed decision-making processes in a
business context.

Tableau for Data Analytics


Tableau, another highly regarded business intelligence tool, was explored to complement Power BI
in providing a more versatile approach to data analysis and visualization. Through Tableau’s
intuitive interface, learners gained skills in connecting to various data sources, creating
visualizations, and building interactive dashboards. The focus on real-time data exploration,
calculated fields, and the seamless creation of storytelling dashboards makes Tableau a critical tool
for presenting complex data in an accessible way. Learners are now equipped to use Tableau’s
powerful visualizations to unlock deeper insights and communicate data-driven stories effectively.

Final Thoughts

By integrating tools like Python, Pandas, Matplotlib, Power BI, and Tableau, this course has
provided a holistic approach to data analysis and visualization. The ability to work with raw data,
clean and transform it, and then use powerful tools to visualize it, is a highly sought-after skill in
today’s data-driven world. Throughout the course, learners have not only gained technical skills but
also developed a mindset for approaching data problems methodically and creatively.

In conclusion, this course has laid a strong foundation for anyone looking to pursue a career in data
science, business intelligence, or data analytics. The combination of programming skills, data
manipulation techniques, and visualization expertise enables learners to confidently tackle real-world
data challenges and make impactful, data-driven decisions.
