
Statistics Assignment -2

LAKSHMANAN R (MT24AAI044)

Q1. Define the following terms:

1. Strictly stationary process

2. Weakly (or wide sense) stationary process

How do these two definitions differ? Provide examples of each from daily life (e.g., temperature variations, stock prices).

A stationary process in statistics and time series analysis refers to a stochastic (random) process whose statistical properties, such as mean and variance, do not change over time. Two common types are strictly stationary and weakly stationary processes.

1. Strictly Stationary Process

A strictly stationary process is a time series where the joint probability distribution
does not change when shifted in time. This means that all statistical properties (mean,
variance, higher moments) are invariant to time shifts. Formally, a process X(t) is strictly
stationary if for any times t1,t2,…,tn, the joint distribution of (X(t1),X(t2),…,X(tn)) is the
same as that of (X(t1+τ),X(t2+τ),…,X(tn+τ)) for any time shift τ.

Example: Suppose the number of certain particles passing through a sensor each
minute is constant on average and randomly fluctuates around this mean, but the entire
distribution of counts remains unchanged over time. Such processes could be
encountered in controlled experiments in physics.
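To make the definition concrete, here is a minimal sketch (an illustration added here, not part of the original assignment) using an i.i.d. Gaussian sequence, which is strictly stationary by construction: any window of it should have the same distribution as a time-shifted window, which we check informally with a two-sample Kolmogorov-Smirnov test.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=10_000)  # i.i.d. Gaussian noise

# Compare the empirical distribution of one window with a time-shifted window
window, shifted = x[:2000], x[5000:7000]
res = ks_2samp(window, shifted)
print(f"KS statistic: {res.statistic:.4f}, p-value: {res.pvalue:.4f}")
# A large p-value is consistent with both windows sharing one distribution,
# as strict stationarity requires (this illustrates it, but does not prove it)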

2. Weakly (or Wide-Sense) Stationary Process

A weakly stationary process (also called wide-sense or second-order stationary) is a process where only the first two moments (mean and covariance) are invariant to time shifts. Specifically, X(t) is weakly stationary if:

• The mean E[X(t)] is constant over time.

• The variance Var(X(t)) is constant over time.

• The covariance Cov(X(t),X(t+τ)) depends only on the time difference τ, not on the
specific times t and t+τ.
Example: Daily temperatures can be approximately weakly stationary over short periods (such as a month), where the mean temperature and variance are stable; these properties may not hold over a full year because of seasonal effects.

Key Differences

• Strict Stationarity is more stringent, requiring the entire probability distribution to remain unchanged over time shifts.

• Weak Stationarity only requires mean and covariance stability over time, which
is often easier to satisfy in real-world data.

Daily Life Examples

• Strictly Stationary: A controlled experiment in a lab where the random error (noise) in measurements follows the same distribution continuously.

• Weakly Stationary: Stock prices may exhibit weak stationarity within a short
trading period if the average price and volatility remain stable. However, they are
not strictly stationary over long periods due to trends and external influences.

Q2. For a process to be weakly stationary, the mean and autocovariance must
satisfy specific properties. State these properties and explain their significance in
the context of random processes.

For a process to be weakly stationary (or wide-sense stationary), it must satisfy specific properties regarding its mean and autocovariance. These properties are
essential to ensure that the process exhibits stability in its statistical characteristics
over time, allowing for meaningful analysis in fields like signal processing, finance, and
engineering.

Properties of a Weakly Stationary Process

1. Constant Mean:

o The mean of the process, E[X(t)], must be constant over time.

o Mathematically: E[X(t)]=μ, where μ is a constant for all t.

Significance: A constant mean indicates that the process does not have a long-term
trend. This stability in the mean ensures that any observed fluctuations are due to
random variations rather than systematic changes over time.

2. Constant Variance (Time-Invariant):

o The variance of the process, Var(X(t)) = E[(X(t)−μ)²], must also be constant over time.

o Mathematically: Var(X(t)) = σ², where σ² is a constant for all t.

Significance: Constant variance implies that the process's fluctuations around the
mean are of similar magnitude over time. This prevents periods of unusually high or low
variability, making the process more predictable and stable.

3. Time-Invariant Autocovariance:

o The autocovariance function Cov(X(t),X(t+τ)) should depend only on the time difference τ (the lag), not on the specific time t.

o Mathematically: Cov(X(t), X(t+τ)) = γ(τ), where γ(τ) is a function of τ alone.

Significance: The autocovariance function describes the degree of dependence between values of the process at different times. By depending only on the lag τ, this property ensures that the relationship between values at different times is stable. It allows us to characterize the memory or persistence of the process without concern for a time-dependent change in structure. (A short simulation below checks all three properties empirically.)
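As a hedged sketch (an added illustration; the parameter choices are assumed rather than taken from the assignment), the following simulates a stationary AR(1) process, Xt = φXt−1 + εt with |φ| < 1, and checks the three properties across many independent realizations.

import numpy as np

rng = np.random.default_rng(42)
phi, sigma, n, n_paths = 0.7, 1.0, 500, 2000

# Start each path from the stationary distribution N(0, sigma^2 / (1 - phi^2)),
# so the process is weakly stationary from the first time step
x = np.empty((n_paths, n))
x[:, 0] = rng.normal(0, sigma / np.sqrt(1 - phi**2), n_paths)
for t in range(1, n):
    x[:, t] = phi * x[:, t - 1] + rng.normal(0, sigma, n_paths)

# Properties 1 and 2: mean and variance look the same at different times
print("mean at t=10 vs t=400:", x[:, 10].mean(), x[:, 400].mean())
print("var  at t=10 vs t=400:", x[:, 10].var(), x[:, 400].var())

# Property 3: lag-5 autocovariance agrees at two different starting times,
# and matches the theoretical value phi^5 * sigma^2 / (1 - phi^2)
c1 = np.mean(x[:, 10] * x[:, 15])
c2 = np.mean(x[:, 400] * x[:, 405])
print("gamma(5) near t=10 / near t=400 / theory:",
      c1, c2, phi**5 * sigma**2 / (1 - phi**2))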

Why These Properties Matter in the Context of Random Processes

In practical applications, weak stationarity allows for consistent estimation of statistical parameters. For instance:

• In Signal Processing: It is essential to assume weak stationarity when designing filters or analysing signals to ensure the system's response remains consistent over time.

• In Finance: Weak stationarity assumptions in stock price returns or asset returns allow analysts to estimate risk and correlations reliably, making models more robust and easier to interpret.

• In Time Series Analysis: Many statistical methods, such as the computation of autocorrelations or spectral analysis, rely on weak stationarity to produce valid, stable results.

Q3. Explain the role of autocovariance in determining whether a process is stationary. What does it mean when the autocovariance is zero at different time points?

The autocovariance function plays a crucial role in determining whether a process is stationary, as it quantifies the relationship between values of a random process at different times. Specifically, the autocovariance measures how much two values of a process, separated by a given time lag τ, co-vary around the mean.

Role of Autocovariance in Stationarity


For a process to be weakly stationary, its autocovariance must meet specific criteria:

• Time Invariance: The autocovariance function γ(τ) = Cov(X(t),X(t+τ)) must depend only on the time lag τ (the difference between two time points) and not on the specific time t. This time-invariance means that the dependency structure between points remains constant over time, ensuring stability in the process’s behaviour.

• Decay Over Time for Stationarity: In many stationary processes, the autocovariance decreases as the lag τ increases. This gradual decay indicates that values of the process become less correlated as the time difference between them increases. Decay itself is not required for stationarity, however; what matters is that the autocovariance function is the same regardless of where in time it is measured.

If a process's autocovariance satisfies these requirements, we can reasonably assume the process is stationary, making it easier to model, predict, and analyse.

When Autocovariance is Zero at Different Time Points

If the autocovariance is zero for a particular time lag τ, it implies that the values of the
process at those points are uncorrelated:

• For instance, if γ(τ)=0 when τ=1, this means that two consecutive values of the
process, separated by one time unit, are uncorrelated. This lack of correlation
suggests that knowing the value of the process at one time point gives no
predictive information about its value one time unit later.

• When autocovariance is zero at larger lags (e.g., γ(τ)=0 for τ>k), it indicates that
values spaced by τ>k are independent or at least uncorrelated over that interval,
which can be characteristic of some random (or white noise) processes.

Implications of Zero Autocovariance

• White Noise: A process with zero autocovariance at all non-zero lags (i.e., γ(τ)=0
for all τ>0) is called a white noise process. This means the process has no
memory of past values, and each value is independent of others.

• Short Memory: Processes with autocovariances that quickly drop to zero for
larger lags are said to have "short memory." This is typical in processes where
past values have minimal impact on future values, making them easier to
forecast over short intervals.

In summary, autocovariance reflects how much past values influence future values in a
stationary process. When autocovariance is zero at certain lags, it indicates
independence or lack of correlation, which often simplifies modelling and analysis.
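A small sketch (an added illustration, assuming plain NumPy white noise) makes this concrete: the sample autocovariance of white noise is close to zero at every non-zero lag, while γ(0) recovers the variance.

import numpy as np

rng = np.random.default_rng(7)
eps = rng.normal(0, 1, 100_000)  # white noise: i.i.d., mean 0, variance 1

def sample_autocov(x, lag):
    # Sample autocovariance: average product of mean-centred values at the lag
    xc = x - x.mean()
    return np.var(xc) if lag == 0 else np.mean(xc[:-lag] * xc[lag:])

for lag in [0, 1, 2, 5, 10]:
    print(f"gamma({lag}) = {sample_autocov(eps, lag):.4f}")
# Expect gamma(0) close to 1 (the variance) and gamma(lag) close to 0 for
# lag > 0, the defining "no memory" signature of a white noise process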
Q4. Explain the relationship between Gaussian processes and stationary
processes. What additional conditions must hold for a Gaussian process to be
stationary?

A Gaussian process and a stationary process are closely related concepts in time
series analysis and stochastic processes, but they have distinct definitions.

Gaussian Processes

A Gaussian process is a type of stochastic process where every collection of random variables X(t1),X(t2),…,X(tn) (for any choice of time points t1,t2,…,tn) follows a
multivariate normal (Gaussian) distribution. The process is fully characterized by its
mean function E[X(t)] and its covariance function Cov(X(t),X(s)) for all pairs of times t
and s.

Stationary Processes

A stationary process is a stochastic process whose statistical properties (such as mean, variance, and autocovariance) do not change over time. In the case of a weakly
stationary process:

• The mean E[X(t)] is constant over time.

• The variance Var(X(t)) is constant over time.

• The autocovariance Cov(X(t),X(t+τ)) depends only on the time difference τ and not on specific times t and t+τ.

Relationship Between Gaussian Processes and Stationary Processes

A Gaussian process can be stationary, but not all Gaussian processes are stationary.
For a Gaussian process to be stationary, it must satisfy the conditions of weak
stationarity, which involve additional constraints on the mean and covariance:

1. Constant Mean: The mean function E[X(t)] of the Gaussian process must be
constant over time. For stationarity, this mean should not vary with time;
otherwise, the process would exhibit a trend.

2. Time-Invariant Covariance (Autocovariance Function): The covariance function Cov(X(t),X(t+τ)) must depend only on the time difference τ, not on the absolute times t and t+τ.

When these conditions hold, a Gaussian process is both Gaussian and stationary.
Because the Gaussian process is fully defined by its mean and covariance functions,
satisfying these two requirements is sufficient to ensure that a Gaussian process is
stationary. In other words, if a Gaussian process has a constant mean and a time-
invariant covariance function, it is weakly stationary, and in this case, weak
stationarity also implies strict stationarity due to the properties of the Gaussian
distribution.
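The standard argument behind this last claim can be sketched in a few lines (a derivation added here for completeness, using the notation defined above):

\[
(X(t_1), \ldots, X(t_n)) \sim \mathcal{N}(m, \Sigma),
\qquad m_i = \mu, \qquad \Sigma_{ij} = \gamma(t_i - t_j).
\]
\[
\text{After a shift by } \tau: \quad
m_i \mapsto \mu, \qquad
\Sigma_{ij} \mapsto \gamma\bigl((t_i + \tau) - (t_j + \tau)\bigr) = \gamma(t_i - t_j).
\]

Since a multivariate normal distribution is completely determined by its mean vector and covariance matrix, and both are unchanged by the shift, the entire joint distribution is shift-invariant, which is precisely strict stationarity.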

Summary

• A stationary Gaussian process has both a constant mean and a covariance that
depends only on the time lag.

• For non-Gaussian processes, weak stationarity does not imply strict stationarity. However, in the Gaussian case, weak stationarity is sufficient to
imply strict stationarity, making Gaussian processes easier to analyze when
stationary assumptions hold.

Example

A simple example of a stationary Gaussian process is white noise with a Gaussian distribution, where:

• The mean is constant (often zero).

• The covariance is zero for any non-zero lag (i.e., no autocorrelation), and the
variance is constant.

This process is stationary and Gaussian, as all requirements for weak stationarity are met, and because it is Gaussian, it is also strictly stationary.

Q5. Describe two examples of strictly stationary processes and two examples of
weakly stationary processes. Justify why each process satisfies the required
conditions.

Examples of Strictly Stationary Processes

1. White Noise Process with a Gaussian Distribution

o Description: A white noise process is a sequence of random variables where each value is independent and identically distributed (i.i.d.) with a mean of zero and a constant variance σ². When these values are normally distributed, we have Gaussian white noise.

o Justification:

▪ The joint distribution of any set of observations remains the same regardless of time shifts, as each value is drawn from the same Gaussian distribution with constant mean and variance.
▪ Since there is no correlation between values at any time lags, the
covariance between any two time points (other than at lag zero) is
zero. This satisfies strict stationarity, as the joint distribution does
not change over time.

2. Stationary Poisson Counting Process

o Description: In a stationary Poisson process, the number of events occurring in any time interval of fixed length follows a Poisson distribution
with a constant rate λ. This type of process is often used to model
random, independent events over time, like the arrival of customers in a
store or the occurrence of accidents in a region.

o Justification:

▪ For any interval of time, the number of events follows the same
Poisson distribution, independent of the specific time at which the
interval occurs.

▪ The distribution of counts is therefore time-invariant, meeting the condition for strict stationarity, as any time shift of the interval results in the same distribution of counts (see the sketch below).
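As a minimal sketch (an added illustration; the rate, horizon, and window width are assumed values), the following simulates arrival times of a homogeneous Poisson process and shows that counts in fixed-width windows have the same mean and variance regardless of where the windows start.

import numpy as np

rng = np.random.default_rng(1)
lam, horizon, width = 3.0, 200.0, 2.0  # rate, total time, window length

# Arrival times come from cumulative sums of i.i.d. exponential gaps
gaps = rng.exponential(1 / lam, size=int(lam * horizon * 2))
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals < horizon]

def counts_in_windows(start):
    # Count events in consecutive windows of the given width from `start`
    edges = np.arange(start, horizon, width)
    return np.histogram(arrivals, bins=edges)[0]

early, late = counts_in_windows(0.0), counts_in_windows(1.3)
print("windows from t=0.0: mean", early.mean(), "var", early.var())
print("windows from t=1.3: mean", late.mean(), "var", late.var())
print("theory (Poisson with mean lam*width):", lam * width)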

Examples of Weakly Stationary Processes

1. Daily Temperature Variations Around a Seasonal Mean

o Description: Over short time intervals (e.g., a month), daily temperature fluctuations can be modelled as weakly stationary. These variations may
have a relatively stable mean, as well as a stable variance and
autocorrelation structure over this period.

o Justification:

▪ The mean temperature over a month is approximately constant, and the variability around this mean remains relatively stable.

▪ The autocovariance function depends only on the lag between days and not on the specific days themselves, satisfying weak
stationarity conditions. However, over a full year, temperature is
not strictly stationary due to seasonality, so it doesn’t meet strict
stationarity.

2. Stock Return Series (Over Short Periods)

o Description: Stock returns over short time frames (such as daily or weekly returns within a month) are often modelled as weakly stationary,
with an average return close to zero and stable volatility (variance) over
time.

o Justification:

▪ The mean daily return is typically close to zero over short periods,
and the variance, representing volatility, remains relatively
constant.

▪ The autocovariance function, if it exists, depends on the lag between time points rather than specific dates, as correlations
between returns tend to be relatively stable over short periods.
However, over longer periods, market trends and changing
volatilities cause stock returns to deviate from stationarity.

Summary

• Strictly Stationary Processes: White noise with a Gaussian distribution and a stationary Poisson process. These processes have joint distributions that do not
change with time shifts.

• Weakly Stationary Processes: Daily temperature variations around a mean (over a month) and short-term stock returns. These processes have stable mean,
variance, and autocovariance over short periods, satisfying weak stationarity but
not strict stationarity.

Q6. Write a Python program that simulates a weakly stationary Gaussian process
with a specified mean and autocovariance function. Visualize the generated
process and plot its autocovariance function.

import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import toeplitz

# Parameters
mean = 5.0      # Mean of the process
variance = 1.0  # Variance of the process
n = 100         # Number of time points in the process
tau_max = 20    # Maximum lag for autocovariance plot

# Define the autocovariance function
def autocovariance_function(tau):
    # Exponential decay as a simple example of a weakly stationary covariance:
    # autocov(tau) = variance * exp(-|tau| / 5)
    return variance * np.exp(-np.abs(tau) / 5)

# Generate the covariance matrix based on the autocovariance function
covariance_matrix = toeplitz([autocovariance_function(tau) for tau in range(n)])

# Generate a sample from the Gaussian process
# The mean vector is just a constant array
mean_vector = mean * np.ones(n)
process_sample = np.random.multivariate_normal(mean_vector, covariance_matrix)

# Plot the generated process
plt.figure(figsize=(12, 6))
plt.plot(process_sample, label='Simulated Gaussian Process')
plt.axhline(mean, color='red', linestyle='--', label='Mean')
plt.title("Simulated Weakly Stationary Gaussian Process")
plt.xlabel("Time")
plt.ylabel("Value")
plt.legend()
plt.show()

# Plot the empirical autocovariance function of the generated process
# (lags start at 1 so the slices below are never empty)
lags = np.arange(1, tau_max + 1)
autocovariances = [np.cov(process_sample[:-lag], process_sample[lag:])[0, 1]
                   for lag in lags]
plt.figure(figsize=(12, 6))
plt.stem(lags, autocovariances)
plt.title("Autocovariance Function of the Simulated Process")
plt.xlabel("Lag")
plt.ylabel("Autocovariance")
plt.show()
To simulate a weakly stationary Gaussian process, we can create a time series where
values are drawn from a Gaussian distribution with a specified mean and autocovariance
function. We will use Python libraries like numpy to generate the time series data and
matplotlib to visualize both the process and its autocovariance function.

1. Parameters: The process has a specified mean, variance, and number of time
points.

2. Autocovariance Function: We define an exponentially decaying autocovariance function, which is a common choice for weakly stationary processes. This function provides positive autocovariance at small lags and reduces towards zero as lag increases.

3. Covariance Matrix: The Toeplitz structure of the covariance matrix ensures that
each element depends only on the time difference (lag), making it consistent
with weak stationarity.

4. Simulating the Process: Using numpy.random.multivariate_normal, we generate a realization of the Gaussian process with the specified mean and covariance structure.

5. Visualization:

o The time series of the simulated Gaussian process is plotted.

o The empirical autocovariance function is calculated from the generated data and plotted against the lag.

This approach provides both a simulation of a weakly stationary Gaussian process and
an empirical view of its autocovariance function, illustrating weak stationarity visually.

Q7. Choose a time series dataset (such as daily temperature or stock prices) and
check for weak stationarity by examining the mean and autocovariance. Implement
the Dickey-Fuller test to determine whether the time series is stationary.

To check for weak stationarity in a time series dataset, we can examine whether the mean and autocovariance remain stable over time. Additionally, we can apply the Dickey-Fuller test to formally assess stationarity. In this example, let's use Python with the pandas, matplotlib, and statsmodels libraries to analyse a time series dataset.

The Dickey-Fuller test checks for the presence of a unit root in the series, where the null
hypothesis (H0) is that the time series is non-stationary. If the test statistic is less than
the critical value, we can reject the null hypothesis and conclude that the series is
stationary.

Let's work with an example dataset of daily temperature or stock prices. For
demonstration, I'll use a mock example of daily stock prices, but you can easily replace
this with any other time series dataset.

Code:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller

# Load or generate a sample time series dataset (e.g., daily stock prices)
# For demonstration, we'll generate synthetic data with a slight trend
np.random.seed(42)
n = 365  # Number of days (1 year of daily data)
dates = pd.date_range(start="2023-01-01", periods=n, freq="D")
data = 100 + np.cumsum(np.random.normal(0, 1, n))  # Synthetic stock price with drift
time_series = pd.Series(data, index=dates)

# Plot the time series
plt.figure(figsize=(12, 6))
plt.plot(time_series, label='Stock Prices')
plt.title("Daily Stock Prices")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.show()

# Calculate and plot rolling mean and rolling standard deviation
rolling_mean = time_series.rolling(window=30).mean()
rolling_std = time_series.rolling(window=30).std()

plt.figure(figsize=(12, 6))
plt.plot(time_series, label='Stock Prices')
plt.plot(rolling_mean, color='red', label='Rolling Mean (30 days)')
plt.plot(rolling_std, color='green', label='Rolling Std (30 days)')
plt.title("Rolling Mean and Rolling Standard Deviation")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.show()

# Conduct the Dickey-Fuller test
result = adfuller(time_series)
print("Dickey-Fuller Test Results:")
print(f"Test Statistic: {result[0]}")
print(f"P-value: {result[1]}")
print("Critical Values:")
for key, value in result[4].items():
    print(f"  {key}: {value}")

# Interpret the result
if result[1] < 0.05:
    print("\nThe p-value is less than 0.05, so we reject the null hypothesis.")
    print("The time series is stationary.")
else:
    print("\nThe p-value is greater than 0.05, so we fail to reject the null hypothesis.")
    print("The time series is non-stationary.")


Explanation of the Code

1. Generate or Load the Time Series Data: We create a synthetic time series with
a slight trend for illustration. Replace this with actual time series data (e.g., stock
prices or temperature) as needed.

2. Plotting the Time Series: This initial plot helps visualize any trend or seasonal
pattern.

3. Rolling Mean and Rolling Variance:


o We compute the rolling mean and rolling standard deviation over a 30-day
window to see if the mean and variance are stable.

o A stationary series should have relatively constant rolling mean and standard deviation.

4. Dickey-Fuller Test:

o We use adfuller from statsmodels.tsa.stattools to perform the test.

o The test statistic, p-value, and critical values are output, allowing us to
determine if the series is stationary.

o If the p-value is below 0.05, we reject the null hypothesis and conclude
that the series is stationary.

Interpreting the Results

• If the rolling mean and variance are stable and the Dickey-Fuller test
indicates stationarity (p-value < 0.05), then the series is weakly stationary.

• If the rolling mean and variance are not stable or the Dickey-Fuller test does not reject the null hypothesis (p-value > 0.05), the series is likely non-stationary, possibly due to a trend or seasonality (a differencing follow-up is sketched below).
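When the test does not reject non-stationarity, as it typically will not for the drifting synthetic series above, a common next step is differencing. A hedged follow-up sketch (not part of the original solution), reusing time_series and adfuller from the code above:

# First-difference the series and re-run the Dickey-Fuller test; differencing
# usually removes a stochastic trend such as the random-walk drift above
diff_series = time_series.diff().dropna()
diff_result = adfuller(diff_series)
print(f"Differenced series - Test Statistic: {diff_result[0]:.4f}, "
      f"P-value: {diff_result[1]:.4f}")
# A small p-value here indicates the differenced series is stationary, i.e.
# the original series is integrated of order one, I(1)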

Q8. Explain why a random walk is not weakly stationary by analyzing its mean and
variance over time.

A random walk is a time series model in which each value is determined by adding a random "step" to the previous value. Formally, a simple random walk Xt can be defined as:

Xt = Xt−1 + ϵt
where:

• X0 is the initial value,

• ϵt is a white noise term with mean zero and constant variance σ²,

• ϵt terms are independently and identically distributed.

Why a Random Walk is Not Weakly Stationary

To analyse why a random walk is not weakly stationary, let's examine its mean and
variance over time.

1. Mean of a Random Walk

For weak stationarity, the mean of the process should be constant over time. Taking expectations and using E[ϵi] = 0 for every i:

E[Xt] = E[X0 + ϵ1 + ϵ2 + ⋯ + ϵt] = X0

So the expected value is constant at X0, and the mean condition alone does not fail. However, individual realizations of Xt fluctuate as the cumulative sum of the ϵt, which is unbounded in the long term, and the failure of stationarity appears in the second moments examined next.

2. Variance of a Random Walk

A key requirement for weak stationarity is that the variance of the process must be constant over time. If X0 = 0 and each ϵi has variance σ², then, because the ϵi are independent:

Var(Xt) = Var(ϵ1 + ϵ2 + ⋯ + ϵt) = tσ²

This shows that the variance of Xt increases linearly with time. As time t progresses, the variance grows indefinitely, violating the requirement for weak stationarity that variance remains constant.

3. Autocovariance of a Random Walk

The autocovariance function in a weakly stationary process should depend only on the time difference (lag) between two points. In a random walk, however, for any lag τ ≥ 0 the increments after time t are independent of Xt, so:

Cov(Xt, Xt+τ) = Cov(Xt, Xt + ϵt+1 + ⋯ + ϵt+τ) = Var(Xt) = tσ²

The autocovariance between Xt and Xt+τ increases as t grows, which means it depends on the specific time t and not just the time lag τ. This dependency violates the stationarity requirement that the autocovariance depends only on the lag, further confirming that a random walk is not stationary.

Summary

• Mean: With no drift, E[Xt] = X0 is constant, so the mean condition is formally satisfied; nevertheless, the cumulative nature of the random steps lets individual paths wander arbitrarily far from X0.

• Variance: The variance tσ² increases linearly with time, violating the stationarity condition of a constant variance.

• Autocovariance: The autocovariance tσ² depends on the time index t and not just the lag τ, failing the stationarity condition.

Thus, a random walk is not weakly stationary: its variance and autocovariance grow with time, which disqualifies it from being considered stationary.
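A minimal simulation sketch (an added illustration) confirms the variance calculation: across many independent random walks, the empirical variance of Xt tracks tσ².

import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, sigma = 5000, 200, 1.0

steps = rng.normal(0, sigma, size=(n_paths, n_steps))
walks = np.cumsum(steps, axis=1)  # X_t is the sum of the first t noise terms (X_0 = 0)

for t in [10, 50, 100, 200]:
    print(f"t={t:3d}: empirical Var(X_t) = {walks[:, t - 1].var():7.2f}, "
          f"theory t*sigma^2 = {t * sigma**2:.1f}")
# The variance grows in proportion to t, so it is not constant in time and
# the random walk cannot be weakly stationary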

Q9. Extend the concept of stationarity to multivariate processes. Describe the conditions under which a multivariate process can be considered stationary. Implement a simulation for a multivariate weakly stationary process.

In a multivariate process, stationarity extends the concept of weak stationarity from individual time series to a vector of multiple time series. For a multivariate process to be
considered weakly stationary, each of its individual series must meet certain
stationarity conditions, and the relationships between series must also remain stable
over time.

Conditions for Multivariate Weak Stationarity

Let Xt = [X1,t, X2,t, …, Xn,t]ᵀ represent an n-dimensional multivariate process, where each Xi,t is a univariate time series. The process Xt is weakly stationary if it meets the following conditions:

1. Constant Mean Vector: The mean vector E[Xt]=μ does not depend on time t, i.e.,
E[Xt]=μ for all t.

2. Constant Covariance Matrix: The covariance matrix Σ = Cov(Xt) remains constant over time. This matrix includes variances for each series along the diagonal and covariances between different series off-diagonal.

3. Time-Invariant Cross-Covariance: The cross-covariance matrix Γ(τ) = Cov(Xt, Xt+τ) depends only on the lag τ and not on the specific time t. This
condition ensures that correlations between different series remain stable over
time.

Example: Simulation of a Multivariate Weakly Stationary Process

To simulate a weakly stationary multivariate process, we can define a multivariate normal process with a constant mean vector and a covariance structure that satisfies
weak stationarity conditions.

Code:

import numpy as np
import matplotlib.pyplot as plt

# Parameters
n = 100                         # Number of time points
mean_vector = np.array([0, 0])  # Mean of the process for each series
variance_x = 1.0                # Variance of X series
variance_y = 1.0                # Variance of Y series
correlation = 0.8               # Correlation between X and Y
tau_max = 20                    # Maximum lag for cross-covariance plot

# Autocovariance function for each series
def autocovariance_function(tau, variance):
    # Exponential decay as a simple example
    return variance * np.exp(-np.abs(tau) / 5)

# Cross-covariance function between X and Y series
def cross_covariance_function(tau):
    # Cross-covariance also decays exponentially, scaled by the correlation
    return correlation * np.sqrt(variance_x * variance_y) * np.exp(-np.abs(tau) / 5)

# Construct the covariance matrix for the multivariate process:
# diagonal blocks hold each series' autocovariances, off-diagonal blocks
# hold the cross-covariances between the two series
cov_matrix = np.zeros((2 * n, 2 * n))
for i in range(n):
    for j in range(n):
        cov_matrix[i, j] = autocovariance_function(abs(i - j), variance_x)          # X autocovariance
        cov_matrix[n + i, n + j] = autocovariance_function(abs(i - j), variance_y)  # Y autocovariance
        cov_matrix[i, n + j] = cross_covariance_function(abs(i - j))                # Cross-covariance
        cov_matrix[n + j, i] = cross_covariance_function(abs(i - j))                # Cross-covariance (symmetric)

# Generate a sample from the multivariate Gaussian process
# (np.repeat lays the mean out as [mu_x]*n followed by [mu_y]*n,
# matching the block structure of the covariance matrix)
full_mean = np.repeat(mean_vector, n)
data = np.random.multivariate_normal(full_mean, cov_matrix)
x_series = data[:n]
y_series = data[n:]

# Plot the generated multivariate time series
plt.figure(figsize=(14, 6))
plt.plot(x_series, label='X Series')
plt.plot(y_series, label='Y Series')
plt.title("Simulated Weakly Stationary Multivariate Process")
plt.xlabel("Time")
plt.ylabel("Value")
plt.legend()
plt.show()

# Plot the empirical cross-covariance between the X and Y series
# (lags start at 1 so the slices below are never empty)
lags = np.arange(1, tau_max + 1)
cross_covariances = [np.cov(x_series[:-lag], y_series[lag:])[0, 1] for lag in lags]
plt.figure(figsize=(12, 6))
plt.stem(lags, cross_covariances)
plt.title("Cross-Covariance Function between X and Y Series")
plt.xlabel("Lag")
plt.ylabel("Cross-Covariance")
plt.show()
1. Define Parameters: Set the means, variances, and correlation between the two
series.

2. Autocovariance and Cross-Covariance Functions:

o Use an exponential decay function to simulate a weakly stationary covariance structure.

o The cross-covariance between the series is scaled by the specified correlation.

3. Covariance Matrix Construction:

o Create a covariance matrix that includes the autocovariances for each series along the diagonal blocks and cross-covariances in the off-diagonal blocks.

o This ensures the resulting data satisfies the weak stationarity conditions.

4. Generate Data: Use np.random.multivariate_normal to generate the time series data from the multivariate Gaussian distribution.

5. Plot the Simulated Data: Visualize each series to confirm the stationarity of the
multivariate process.

6. Cross-Covariance Plot: Calculate and plot the cross-covariance between the two series for different lags to confirm the time-invariant correlation structure.

This code simulates a weakly stationary multivariate process and provides a visual
confirmation of the weak stationarity through its mean and cross-covariance properties.
