Mixed Poisson process with Max-U-Exp mixing variable - Working version

arXiv:2307.09798v1 [math.PR] 19 Jul 2023

Pavlina K. Jordanova
Faculty of Mathematics and Informatics, Konstantin Preslavsky University of Shumen,
115 "Universitetska" str., 9712 Shumen, Bulgaria.
Corresponding author: pavlina [email protected].

Evelina Veleva
Department of Applied Mathematics and Statistics, "Angel Kanchev" University of Ruse, Bulgaria.

Abstract
This work defines and investigates the properties of the Max-U-Exp distribution. The method of moments is applied in order to estimate its parameters. Then, by using the previous general theory about Mixed Poisson processes, developed by Grandell (1997) and Karlis and Xekalaki (2005), and analogously to Jordanova et al. (2023) and Jordanova and Stehlik (2017), we define and investigate the properties of the new random vectors and random variables which are related to this particular case of a Mixed Poisson process. The Exp-Max-U-Exp distribution is defined and thoroughly investigated. It arises in a natural way as the distribution of the inter-arrival times in the Mixed Poisson process with Max-U-Exp mixing variable. The distribution of the renewal moments is called Erlang-Max-U-Exp and is defined via its probability density function. Investigation of its properties follows. Finally, the corresponding Mixed Poisson process with Max-U-Exp mixing variable is defined. Its finite dimensional and conditional distributions are found and their numerical characteristics are determined.

1 INTRODUCTION
The total set of probability distributions and random processes is uncountable; therefore, when introducing and investigating them, it is desirable to show the connections between them. One way to do this is to start with some random process and to obtain the distributions of all stochastic elements which describe it. In 1997 Grandell [1] summarised and developed the general theory of Mixed Poisson processes and presented some of their potential applications. Later on, in 2005, Karlis and Xekalaki [2] made a very good review of the investigations of many particular cases of such processes and obtained some of their new properties and multivariate versions. Analogously, in 2017 Jordanova and Stehlik [4] studied the case when the mixing variable is Pareto distributed and defined the distributions which describe the univariate and multivariate processes related to this case, the distribution of the inter-arrival times, the moment of the n-th event, and so forth. In 2023 Jordanova et al. [3] considered the very useful and general case when the mixing variable is Stacy distributed. Here we define a new Max-U-Exp distribution and investigate its properties. The method of moments is applied in order to estimate its parameters. Then,
by using the previous general theory about Mixed Poisson processes we define and
investigate the properties of the new random vectors and random variables, which are
related to this particular case of a Mixed Poisson process. Exp-Max-U-Exp distribu-
tion is defined and thoroughly investigated. It arises in a natural way as a distribution
of the inter-arrival times in the Mixed Poisson process with Max-U-Exp mixing vari-
able. The distribution of the renewal moments is called Erlang-Max-U-Exp and is
defined via its probability density function. Investigation of its properties follows.
Finally, the corresponding Mixed Poisson process with Max-U-Exp mixing variable
is defined. Its finite dimensional and conditional distributions are found and their
numerical characteristics are determined.
Throughout this work we denote by ξ ∈ Bi(n, p) the fact that a random variable (r.v.) ξ belongs to the set of Binomial distributions with parameters n ∈ N and p ∈ (0, 1). As usual, $\Gamma(\alpha) = \int_0^\infty x^{\alpha-1}e^{-x}\,dx$ is the notation for Euler's Gamma function; $\Gamma(\alpha, x) = \int_x^\infty y^{\alpha-1}e^{-y}\,dy$ and $\gamma(\alpha, x) = \Gamma(\alpha) - \Gamma(\alpha, x)$ are correspondingly the upper and the lower incomplete Gamma functions. F_ξ(x) is the cumulative distribution function (c.d.f.) of the r.v. ξ and P_ξ(x) is its probability density function (p.d.f.).

2 MAX-U-EXP DISTRIBUTION
Definition 1. We say that the r.v. ξ is Max-U-Exp distributed with parameters a > 0 and λ > 0, if it has a cumulative distribution function (c.d.f.)
\[
F_\xi(x) = \begin{cases}
0, & x \le 0 \\[2pt]
\dfrac{x}{a}\left(1 - e^{-\lambda x}\right), & x \in (0, a] \\[2pt]
1 - e^{-\lambda x}, & x > a.
\end{cases} \qquad (1)
\]
Briefly, we will denote this by ξ ∈ Max-U-Exp(a; λ).


Proposition 1.
a) ξ ∈ Max-U-Exp(a; λ) if and only if the probability density function of ξ is
\[
P_\xi(x) = \begin{cases}
0, & x \le 0 \\[2pt]
\dfrac{1}{a}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right), & x \in (0, a] \\[2pt]
\lambda e^{-\lambda x}, & x > a.
\end{cases} \qquad (2)
\]

b) (Scaling property) If ξ ∈ Max-U-Exp(a; λ) and k > 0 is a constant, then
\[
k\xi \in \mathrm{Max\text{-}U\text{-}Exp}\left(ka; \frac{\lambda}{k}\right).
\]
c) If ξ ∈ Max-U-Exp(a; λ), the hazard rate function of this distribution is
\[
h_\xi(x) = \begin{cases}
0, & x \le 0 \\[2pt]
\dfrac{1 - e^{-\lambda x} + \lambda x e^{-\lambda x}}{a - x + x e^{-\lambda x}}, & x \in (0, a] \\[2pt]
\lambda, & x > a.
\end{cases}
\]

Proof: a) is an immediate corollary of the relation between the c.d.f. and the probability density function (p.d.f.).
b) For k > 0, by using a) we obtain
\[
P_{k\xi}(x) = \frac{1}{k}P_\xi\left(\frac{x}{k}\right) = \begin{cases}
0, & x \le 0 \\[2pt]
\dfrac{1}{ka}\left(1 - e^{-\frac{\lambda}{k}x} + \frac{\lambda}{k}x\,e^{-\frac{\lambda}{k}x}\right), & x \in (0, ka] \\[2pt]
\dfrac{\lambda}{k}e^{-\frac{\lambda}{k}x}, & x > ka.
\end{cases}
\]
The rest follows by the uniqueness of the correspondence between the p.d.f. and the distribution, and formula (2).
c) follows by the definition of the hazard rate function $h_\xi(x) = \frac{P_\xi(x)}{1 - F_\xi(x)}$, Definition 1, and Proposition 1, a). □
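To make the distribution easy to experiment with, the following short Python sketch (an editorial illustration, not part of the original text; the function names maxuexp_cdf and maxuexp_pdf are ours) implements the c.d.f. (1) and the p.d.f. (2) and checks numerically that the density integrates to one, splitting the integral at the discontinuity point x = a.

```python
import numpy as np
from scipy.integrate import quad

def maxuexp_cdf(x, a, lam):
    """C.d.f. (1) of the Max-U-Exp(a; lambda) distribution."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, 0.0,
                    np.where(x <= a, (x / a) * (1.0 - np.exp(-lam * x)),
                             1.0 - np.exp(-lam * x)))

def maxuexp_pdf(x, a, lam):
    """P.d.f. (2); note the jump of the density at the point x = a."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, 0.0,
                    np.where(x <= a,
                             (1.0 - np.exp(-lam * x) + lam * x * np.exp(-lam * x)) / a,
                             lam * np.exp(-lam * x)))

a, lam = 2.0, 1.5
# the density should integrate to one; split the integral at the discontinuity x = a
total = (quad(maxuexp_pdf, 0, a, args=(a, lam))[0]
         + quad(maxuexp_pdf, a, np.inf, args=(a, lam))[0])
print(total)                      # ~ 1.0
print(maxuexp_cdf(a, a, lam))     # equals 1 - exp(-lam * a): the c.d.f. is continuous at a
```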
In the next theorem and further on, we denote by U(0, a) the Uniform distribution on the interval (0, a), and by Exp(λ) the Exponential distribution with mean 1/λ, λ > 0.
Theorem 1. Let θ ∈ U(0, a), η ∈ Exp(λ), and let θ and η be independent. Denote ξ := max(θ, η). Then,

a) ξ ∈ Max-U-Exp(a; λ);

b) The mean and the moments of ξ are correspondingly
\[
E\xi = \frac{a}{2} + \frac{1}{a\lambda^2}\left(1 - e^{-\lambda a}\right)
\]
and
\[
E(\xi^k) = \frac{a^k}{k+1} + \frac{k}{a\lambda^{k+1}}\gamma(k+1, a\lambda) + \frac{k}{\lambda^k}\Gamma(k, \lambda a), \quad k > -1.
\]

c) The variance of ξ is
\[
D\xi = \frac{a^2}{12} - \frac{1}{\lambda^2}\left(1 + e^{-\lambda a}\right) + \frac{4}{a\lambda^3}\left(1 - e^{-\lambda a}\right) - \frac{1}{a^2\lambda^4}\left(1 - e^{-\lambda a}\right)^2.
\]

d) The Laplace-Stieltjes transform of ξ is
\[
E(e^{-\xi t}) = \frac{1}{at}\left(1 - e^{-at}\right) - \frac{1}{a(\lambda+t)}\left(1 - e^{-(\lambda+t)a} - \lambda a e^{-(\lambda+t)a}\right) + \frac{\lambda}{a(\lambda+t)^2}\left(1 - e^{-(\lambda+t)a} - a(\lambda+t)e^{-(\lambda+t)a}\right).
\]

Proof: a) Consider x ∈ R. The definition of ξ and the independence of θ and η entail
\[
F_\xi(x) = P(\max(\theta, \eta) \le x) = P(\theta \le x, \eta \le x) = P(\theta \le x)P(\eta \le x).
\]
Now, by using the definitions of the Exp(λ) and U(0, a) distributions we obtain the c.d.f. (1). The rest follows by the uniqueness of the correspondence between the c.d.f. and the probability law of the considered random variable (r.v.).
b) follows by the definition of the mathematical expectation, initial moments, and (2).
c) follows by the formula Dξ = E(ξ²) − (Eξ)², the definition of the second initial moment of a r.v., and (2).
d) is a corollary of the definition of the Laplace-Stieltjes transform of a r.v., and (2). □
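The representation ξ = max(θ, η) in Theorem 1 also makes the distribution straightforward to simulate. The sketch below (an editorial illustration, with our own variable names) draws a large sample of such maxima and compares the empirical mean and variance with the expressions in Theorem 1 b) and c).

```python
import numpy as np

rng = np.random.default_rng(0)
a, lam, n = 2.0, 1.5, 200_000

# Theorem 1: xi = max(theta, eta) with theta ~ U(0, a) and eta ~ Exp(lambda), independent
theta = rng.uniform(0.0, a, size=n)
eta = rng.exponential(1.0 / lam, size=n)
xi = np.maximum(theta, eta)

mean_theory = a / 2 + (1 - np.exp(-lam * a)) / (a * lam**2)
var_theory = (a**2 / 12 - (1 + np.exp(-lam * a)) / lam**2
              + 4 * (1 - np.exp(-lam * a)) / (a * lam**3)
              - (1 - np.exp(-lam * a))**2 / (a**2 * lam**4))

print(xi.mean(), mean_theory)   # the two values should be close
print(xi.var(), var_theory)
```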

Let us now use the method of moments and obtain an algorithm for estimation of the parameters of this distribution. Suppose we have a sample of n independent observations on a r.v. ξ ∈ Max-U-Exp(a; λ), and let us denote by m_k the k-th empirical initial moment computed from this sample. Then, it is well known that m_1 and m_2 are unbiased and consistent estimators correspondingly of Eξ and E(ξ²), while an unbiased estimator of (Eξ)² is (n m_1² − m_2)/(n − 1). By Theorem 1, we obtain that the first two initial moments are the following functions of the unknown parameters a and λ:
\[
E\xi = \frac{a}{2} + \frac{1}{a\lambda^2}\left(1 - e^{-\lambda a}\right) = \frac{1}{xy}\left(\frac{x^2}{2} + 1 - e^{-x}\right),
\]
\[
E(\xi^2) = \frac{a^2}{3} + \frac{4}{a\lambda^3}\left(1 - e^{-a\lambda}\right) - \frac{2}{\lambda^2}e^{-\lambda a} = \frac{1}{xy^2}\left[\frac{x^3}{3} + 4 - 2e^{-x}(x+2)\right],
\]
where we have used the notations x = aλ, y = λ. Equivalently,
\[
x\left[\frac{x^3}{3} + 4 - 2e^{-x}(x+2)\right] : \left(\frac{x^2}{2} + 1 - e^{-x}\right)^2 = \frac{E(\xi^2)}{(E\xi)^2}, \qquad
y = \left(\frac{x^2}{2} + 1 - e^{-x}\right) : \left(x\,E\xi\right). \qquad (3)
\]

The first equation of system (3) is nonlinear, depending only on the unknown x = aλ. Its solution can be found numerically by replacing its right-hand side with an estimate r̂ for the ratio E(ξ²)/(Eξ)² calculated from the sample. Such an estimate can be m_2/m_1² or m_2(n−1)/(n m_1² − m_2); note that m_2(n−1)/(n m_1² − m_2) > m_2/m_1². After finding x, we determine the unknown y = λ from the second equation of system (3), estimating Eξ with m_1. Finally, we find a = x/y. The graph of the left-hand side of the first equation of system (3) as a function of x = aλ is shown in Figure 1.

Figure 1: The left-hand side of the first equation of system (3) as a function of x = aλ.

We can see that when the estimator r̂ ∈ (4/3, 2), the system (3) will have a unique solution. It can easily be checked that, as x = aλ tends to infinity, the ratio E(ξ²)/(Eξ)² tends to 4/3 = 1.3333. However, to each r̂ in the interval [1.2452, 1.3333] there correspond two possible values of aλ on the graph of the left-hand side of the first equation, i.e. two possible solutions. The constant 1.2452 is the minimum value that the ratio E(ξ²)/(Eξ)² can take; it is reached when aλ = 4.0232. In the simulations made, sometimes r̂ took values even less than 1.2452. In such a case, the system (3) has no solution, or we can take aλ = 4.0232 as the value that minimizes the square of the difference between the left and right sides of the first equation.
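The numerical solution of the first equation of (3) described above can be sketched as follows. This is an editorial illustration under the assumptions of this section: the constants 2.1738, 4.0232 and 1.2452 are taken from the text, r̂ is estimated by m_2/m_1², the bracketing intervals for the root search are our choice, and the function names are ours.

```python
import numpy as np
from scipy.optimize import brentq

def ratio_lhs(x):
    """Left-hand side of the first equation of system (3), as a function of x = a*lambda."""
    return (x * (x**3 / 3 + 4 - 2 * np.exp(-x) * (x + 2))
            / (x**2 / 2 + 1 - np.exp(-x))**2)

def estimate_max_u_exp(sample):
    """Moment estimates of (a, lambda); returns a list of candidate pairs."""
    sample = np.asarray(sample, dtype=float)
    m1, m2 = sample.mean(), (sample**2).mean()
    r_hat = m2 / m1**2                       # estimate of E(xi^2) / (E xi)^2
    if r_hat >= 4.0 / 3.0:                   # unique root, to the left of 2.1738 (assumes r_hat < 2)
        roots = [brentq(lambda x: ratio_lhs(x) - r_hat, 1e-8, 2.1738)]
    elif r_hat >= 1.2452:                    # two roots, one on each side of 4.0232
        roots = [brentq(lambda x: ratio_lhs(x) - r_hat, 2.1738, 4.0232),
                 brentq(lambda x: ratio_lhs(x) - r_hat, 4.0232, 500.0)]
    else:                                    # no root: fall back to the minimizer, as suggested above
        roots = [4.0232]
    estimates = []
    for x in roots:
        y = (x**2 / 2 + 1 - np.exp(-x)) / (x * m1)   # second equation of (3): y = lambda
        estimates.append((x / y, y))                  # a = x / y
    return estimates

rng = np.random.default_rng(1)
a, lam = 2.0, 1.5
sample = np.maximum(rng.uniform(0, a, 5000), rng.exponential(1 / lam, 5000))
print(estimate_max_u_exp(sample))
```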
When r̂ < 1.3333 we could conclude that aλ > 2.1738 (the x-coordinate of the
corresponding value on the graph) and use another approach to estimate aλ, for
example the least squares method to compare the empirical with the theoretical
distribution functions. In detail, this approach is described for instance in [5] for
parameter estimation in the Generalized exponential distribution. From (1) we have
that P(ξ > a) = 1 − Fξ (a) = e−λa , and for aλ > 2.1738, P(ξ > a) will be less than
0.1137, that is, a relatively small percentage of the observations in the sample will
be greater than the parameter a. With probability close to one no more than 25%
of the observations will be greater than a. For example, if n = 20 and aλ = 2.2,
P(ξ > a) = e−2.2 = 0.11, and the probability that no more than 25% of the observa-
tions are greater than a is
\[
P(X \le 5) = \sum_{i=0}^{5}\binom{20}{i}\,0.11^i\,0.89^{20-i} = 0.98,
\]
where X ∈ Bi(20, 0.11). For aλ > 2.1738 we can simply remove the largest 25%
of the observations and with the remaining k observations x1 , x2 , . . . , xk form and
minimize with respect to the parameters a and λ the sum
\[
\sum_{i=1}^{k}\left(\frac{i}{n+1} - F_\xi(x_i)\right)^2 = \sum_{i=1}^{k}\left(\frac{i}{n+1} - \frac{x_i}{a}\left(1 - e^{-\lambda x_i}\right)\right)^2. \qquad (4)
\]

Alternatively, a lower initial estimate â for the parameter a can be determined using the histogram of the sample. Since the density function of the distribution will
always have a discontinuity at point a, from a given location â onwards the heights
of the bars in the histogram will drop sharply, decreasing exponentially to 0. For
larger values of the product aλ, the exponential tail will not even be present in the
histogram at all, and all observations will be in the interval (0, â). Then, we determine
the number k of elements in the sample that are smaller than â and with their help
we form and minimize the sum (4) with respect to the parameters a and λ.
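A minimal sketch of the least-squares step (4) follows, assuming the k retained (smallest) observations all fall below a and are sorted in increasing order; the function name and the optimizer choice are ours, not prescribed by the text.

```python
import numpy as np
from scipy.optimize import minimize

def fit_by_cdf_least_squares(sample, keep_fraction=0.75):
    """Minimize the sum (4) over (a, lambda), using only the smallest keep_fraction of the data."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    k = int(keep_fraction * n)
    xk = x[:k]                                  # retained order statistics, assumed to lie in (0, a]
    target = np.arange(1, k + 1) / (n + 1)      # plotting positions i/(n+1) from (4)

    def objective(params):
        a, lam = params
        if a <= xk[-1] or lam <= 0:             # keep all retained observations inside (0, a]
            return np.inf
        fitted = (xk / a) * (1.0 - np.exp(-lam * xk))
        return np.sum((target - fitted) ** 2)

    start = np.array([1.1 * xk[-1], 1.0 / xk.mean()])
    return minimize(objective, start, method="Nelder-Mead").x   # (a_hat, lambda_hat)

rng = np.random.default_rng(2)
a, lam = 2.0, 1.5
data = np.maximum(rng.uniform(0, a, 2000), rng.exponential(1 / lam, 2000))
print(fit_by_cdf_least_squares(data))
```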

3 EXP-MAX-U-EXP AND ERLANG-MAX-U-EXP DISTRIBUTIONS
Definition 2. We say that the r.v. τ is Exp-Max-U-Exp distributed with parameters a > 0 and λ > 0, if it has a p.d.f.
\[
P_\tau(t) = \begin{cases}
0, & t \le 0 \\[2pt]
\dfrac{1}{at^2}\left(1 - e^{-at} - ate^{-at}\right) + \dfrac{\lambda - t}{a(\lambda+t)^3}\left(1 - e^{-a(\lambda+t)}\right) + \dfrac{t}{(\lambda+t)^2}e^{-a(\lambda+t)}, & t > 0.
\end{cases} \qquad (5)
\]
Briefly, we will denote this by τ ∈ Exp-Max-U-Exp(a; λ). This distribution is proper as far as $\int_0^\infty P_\tau(t)\,dt = 1$.
The proof of the following result is based on the correspondence between the c.d.f., the p.d.f. and the probability distribution.
Proposition 2. For a > 0 and λ > 0, τ ∈ Exp-Max-U-Exp(a; λ) if and only if its c.d.f. is
\[
F_\tau(t) = \begin{cases}
0, & t \le 0 \\[2pt]
1 - \dfrac{1 - e^{-at}}{at} + \dfrac{t}{a(\lambda+t)^2}\left(1 - e^{-a(\lambda+t)}\right), & t > 0.
\end{cases} \qquad (6)
\]

Definition 3. We say that the random vector (rv.) (τ, ξ) has bivariate Exp-Max-U-Exp distribution of the first (I-st) kind with parameters a > 0 and λ > 0, if it has a joint p.d.f.
\[
P_{\tau,\xi}(t, x) = \begin{cases}
0, & x \le 0 \ \text{or} \ t \le 0 \\[2pt]
\dfrac{x e^{-tx}}{a}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right), & x \in (0, a],\ t > 0 \\[2pt]
\lambda x e^{-(\lambda+t)x}, & x > a,\ t > 0.
\end{cases} \qquad (7)
\]
Briefly, we will denote this by (τ, ξ) ∈ Exp-Max-U-Exp-I(a, λ).
Theorem 2. For a > 0 and λ > 0, if ξ ∈ Max-U-Exp(a; λ) and, for x > 0, (τ|ξ = x) ∈ Exp(x), then:
a) τ ∈ Exp-Max-U-Exp(a, λ);
b) τ $\overset{d}{=}$ η/ξ, where η ∈ Exp(1), and ξ and η are independent.

c) For p ∈ (0, 1),
\[
E(\tau^p) = \Gamma(p+1)\left[\frac{1}{a^p(1-p)} + \frac{\lambda^{p-1}}{a}\left((p + a\lambda)\Gamma(1-p, \lambda a) - (\lambda a)^{1-p}e^{-\lambda a} - p\Gamma(1-p)\right)\right].
\]
For p ≥ 1, E(τ^p) = ∞.

d) The joint distribution of τ and ξ is (τ, ξ) ∈ Exp-Max-U-Exp-I(a, λ) and (τ, ξ) $\overset{d}{=}$ (η/ξ, ξ), where η ∈ Exp(1), and ξ and η are independent.

e) For all t > 0, P_ξ(x|τ = t) = 0 for x ≤ 0,
\[
P_\xi(x|\tau = t) = \frac{x t^2(\lambda+t)^3 e^{-tx}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)}{(\lambda+t)^3\left(1 - e^{-at} - ate^{-at}\right) + t^2(\lambda - t)\left(1 - e^{-a(\lambda+t)}\right) + a(\lambda+t)t^3 e^{-a(\lambda+t)}}, \quad x \in (0, a],
\]
\[
P_\xi(x|\tau = t) = \frac{a\lambda x t^2(\lambda+t)^3 e^{-(\lambda+t)x}}{(\lambda+t)^3\left(1 - e^{-at} - ate^{-at}\right) + t^2(\lambda - t)\left(1 - e^{-a(\lambda+t)}\right) + a(\lambda+t)t^3 e^{-a(\lambda+t)}}, \quad x > a.
\]

f) The mean square regression function is E(τ|ξ = x) = 1/x, x > 0.


g) For t > 0, the mean square regression function is
\[
E(\xi|\tau = t) = \frac{2(\lambda+t)^4 - 2t^3(t - 2\lambda) - (\lambda+t)^4 e^{-at}\left(2 + 2at + a^2t^2\right) + t^3 e^{-a(\lambda+t)}\left[2t - 4\lambda + 2a(\lambda+t)(t-\lambda) + a^2 t(\lambda+t)^2\right]}{t(\lambda+t)\left[(\lambda+t)^3\left(1 - e^{-at} - ate^{-at}\right) + t^2(\lambda-t)\left(1 - e^{-a(\lambda+t)}\right) + a(\lambda+t)t^3 e^{-a(\lambda+t)}\right]}.
\]

Proof: a) For t > 0, by the integral form of the Total probability formula and (2) we obtain
\[
P_\tau(t) = \int_0^\infty P_\tau(t|\xi = x)P_\xi(x)\,dx = \int_0^a \frac{x}{a}e^{-xt}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)dx + \int_a^\infty \lambda x e^{-(\lambda+t)x}\,dx
\]
\[
= \frac{1}{at^2}\left(1 - e^{-at} - ate^{-at}\right) + \frac{1}{a(\lambda+t)^2}\left(\frac{2\lambda}{\lambda+t} - 1\right)\left(1 - e^{-a(\lambda+t)} - a(\lambda+t)e^{-a(\lambda+t)}\right) + \frac{\lambda}{(\lambda+t)^2}e^{-a(\lambda+t)}
\]
\[
= \frac{1}{at^2}\left(1 - e^{-at} - ate^{-at}\right) + \frac{\lambda - t}{a(\lambda+t)^3}\left(1 - e^{-a(\lambda+t)}\right) + \frac{t}{(\lambda+t)^2}e^{-a(\lambda+t)}.
\]

Now, we compare it with (5) and complete the proof of a).
b) For t > 0, by the integral form of the Total probability formula we obtain
\[
P_{\frac{\eta}{\xi}}(t) = \int_0^\infty P_{\frac{\eta}{\xi}}(t|\xi = x)P_\xi(x)\,dx = \int_0^\infty P_{\frac{\eta}{x}}(t)P_\xi(x)\,dx = \int_0^\infty x P_\eta(tx)P_\xi(x)\,dx
\]
\[
= \int_0^a \frac{x}{a}e^{-xt}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)dx + \int_a^\infty \lambda x e^{-x(\lambda+t)}\,dx.
\]
The integrals are the same as in a), which means that for all t > 0, P_τ(t) = P_{η/ξ}(t). The rest follows by the uniqueness of the correspondence between the p.d.f. and the probability law.
c) In order to obtain these moments we apply the Double expectation formula,
and the formula for the moments of the exponential distribution.
d) follows by the formula Pτ,ξ (t, x) = Pτ (t|ξ = x)Pξ (x), when we replace the p.d.f.
of the exponential distribution, use its scaling property, and (2).
e) can be proved by the Bayes’ formula for the densities, (2), (5), and the p.d.f.
of the Exponential distribution.
f) follows by the expectation of the Exponential distribution.
g) follows by the formula for the expectation, and e). □
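Theorem 2 b) and Proposition 2 can be checked by simulation: draw τ as η/ξ and compare the empirical c.d.f. with (6). The following editorial sketch does this at a few points; the function name exp_max_u_exp_cdf is ours.

```python
import numpy as np

rng = np.random.default_rng(3)
a, lam, n = 2.0, 1.5, 300_000

# Theorem 2 b): tau has the same law as eta / xi with eta ~ Exp(1) independent of xi
xi = np.maximum(rng.uniform(0, a, n), rng.exponential(1 / lam, n))
tau = rng.exponential(1.0, n) / xi

def exp_max_u_exp_cdf(t, a, lam):
    """C.d.f. (6) of the Exp-Max-U-Exp(a; lambda) distribution, t > 0."""
    return (1 - (1 - np.exp(-a * t)) / (a * t)
            + t * (1 - np.exp(-a * (lam + t))) / (a * (lam + t) ** 2))

for t in (0.2, 0.5, 1.0, 2.0, 5.0):
    print(t, (tau <= t).mean(), exp_max_u_exp_cdf(t, a, lam))  # the two columns should match
```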
Definition 4. We say that the rv. (τ₁, τ₂, ..., τ_k) has Multivariate Exp-Max-U-Exp distribution of the second (II-nd) kind with parameters a > 0 and λ > 0, if it has a joint p.d.f.
\[
P_{\tau_1,\tau_2,\dots,\tau_k}(t_1, t_2, \dots, t_k) = \frac{\gamma(k+1, a(t_1+\dots+t_k))}{a(t_1+\dots+t_k)^{k+1}} + (\lambda k - t_1 - \dots - t_k)\,\frac{\gamma(k+1, a(t_1+\dots+t_k+\lambda))}{a(t_1+\dots+t_k+\lambda)^{k+2}} + \lambda k\,\frac{\Gamma(k, a(t_1+\dots+t_k+\lambda))}{(\lambda+t_1+\dots+t_k)^{k+1}},
\]
for t₁ > 0, t₂ > 0, ..., t_k > 0, and P_{\tau_1,\tau_2,\dots,\tau_k}(t_1, t_2, \dots, t_k) = 0, otherwise.
Briefly, we will denote this by (τ₁, τ₂, ..., τ_k) ∈ Exp-Max-U-Exp-II(a, λ).
Definition 5. We say that the r.v. T_n is Erlang-Max-U-Exp distributed with parameters n ∈ N, a > 0, and λ > 0, if it has a p.d.f.
\[
P_{T_n}(t) = \frac{t^{n-1}}{a(n-1)!}\left[\frac{\gamma(n+1, at)}{t^{n+1}} + (\lambda n - t)\frac{\gamma(n+1, a(\lambda+t))}{(\lambda+t)^{n+2}} + \lambda n a\,\frac{\Gamma(n, a(\lambda+t))}{(\lambda+t)^{n+1}}\right]
\]
when t > 0, and P_{T_n}(t) = 0, otherwise. Briefly, we will denote this by T_n ∈ Erlang-Max-U-Exp(n; a, λ).
Theorem 3. For a > 0 and λ > 0, if ξ ∈ Max-U-Exp(a; λ) and, for x > 0, (τ₁, τ₂, ..., τ_k|ξ = x) are independent identically Exp(x) distributed r.vs., then

a) (τ₁, τ₂, ..., τ_k) ∈ Exp-Max-U-Exp-II(a, λ).

b) For i = 1, 2, ..., k, τ_i ∈ Exp-Max-U-Exp(a, λ).

c) (τ₁, τ₂, ..., τ_k) $\overset{d}{=}$ (η₁/ξ, η₂/ξ, ..., η_k/ξ), where η₁, η₂, ..., η_k are independent identically distributed (i.i.d.) Exp(1) r.vs., independent of ξ.

d) T_n := τ₁ + ... + τ_n ∈ Erlang-Max-U-Exp(n; a, λ). T_n $\overset{d}{=}$ (η₁ + η₂ + ... + η_n)/ξ, where η₁, η₂, ..., η_n are i.i.d. Exp(1), independent of ξ. Equivalently, T_n $\overset{d}{=}$ θ_n/ξ, where θ_n ∈ Gamma(n, 1) is independent of ξ.
e) For p ∈ (0, 1),
\[
E(T_n^p) = \frac{\Gamma(p+n)}{(n-1)!}\left[\frac{1}{a^p(1-p)} + \frac{\lambda^{p-1}}{a}\left((p + a\lambda)\Gamma(1-p, \lambda a) - (\lambda a)^{1-p}e^{-\lambda a} - p\Gamma(1-p)\right)\right].
\]
For p ≥ 1, E(T_n^p) = ∞.
Proof: a) For t₁ > 0, t₂ > 0, ..., t_k > 0 the integral form of the Total probability formula and (2) entail
\[
P_{\tau_1,\tau_2,\dots,\tau_k}(t_1, t_2, \dots, t_k) = \int_0^\infty P_{\tau_1,\tau_2,\dots,\tau_k}(t_1, t_2, \dots, t_k|\xi = x)P_\xi(x)\,dx
\]
\[
= \frac{1}{a}\int_0^a x^k e^{-x(t_1+t_2+\dots+t_k)}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)dx + \lambda\int_a^\infty x^k e^{-x(t_1+t_2+\dots+t_k+\lambda)}\,dx
\]
\[
= \frac{\gamma(k+1, a(t_1+\dots+t_k))}{a(t_1+\dots+t_k)^{k+1}} + (\lambda k - t_1 - \dots - t_k)\,\frac{\gamma(k+1, a(t_1+\dots+t_k+\lambda))}{a(t_1+\dots+t_k+\lambda)^{k+2}} + \lambda k\,\frac{\Gamma(k, a(t_1+\dots+t_k+\lambda))}{(\lambda+t_1+\dots+t_k)^{k+1}}.
\]
Otherwise P_{\tau_1,\tau_2,\dots,\tau_k}(t_1, t_2, \dots, t_k) = 0. Now, we compare the last expression with Definition 4 and complete the proof of this point.
b) By condition, (τ₁, τ₂, ..., τ_k|ξ = x) are i.i.d.; therefore, for any fixed i = 1, 2, ..., k we can just apply Theorem 2, a) and obtain immediately that τ_i ∈ Exp-Max-U-Exp(a, λ).
c) Consider t₁ > 0, t₂ > 0, ..., t_k > 0. Analogously to the proof of a) we obtain the same expression as in a),
\[
P_{\frac{\eta_1}{\xi},\frac{\eta_2}{\xi},\dots,\frac{\eta_k}{\xi}}(t_1, t_2, \dots, t_k) = \int_0^\infty P_{\frac{\eta_1}{\xi},\frac{\eta_2}{\xi},\dots,\frac{\eta_k}{\xi}}(t_1, t_2, \dots, t_k|\xi = x)P_\xi(x)\,dx = \int_0^\infty P_{\eta_1,\eta_2,\dots,\eta_k}(xt_1, xt_2, \dots, xt_k)\,x^k P_\xi(x)\,dx
\]
\[
= \frac{1}{a}\int_0^a x^k e^{-x(t_1+t_2+\dots+t_k)}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)dx + \lambda\int_a^\infty x^k e^{-x(t_1+t_2+\dots+t_k+\lambda)}\,dx,
\]
and P_{\frac{\eta_1}{\xi},\dots,\frac{\eta_k}{\xi}}(t_1, \dots, t_k) = 0, otherwise. The uniqueness of the correspondence between the p.d.f. and the probability distribution, together with Definition 4, completes the proof.
d) follows by the integral form of the Total probability formula and the relation between the Erlang and the Exponential distribution. The relation between the Erlang, Exponential and Gamma distributions completes the proof.
e) Consider p ∈ (0, 1). By d) and the independence of η₁, η₂, ..., η_n and ξ we have
\[
E(T_n^p) = E\left(\left(\frac{\eta_1+\eta_2+\dots+\eta_n}{\xi}\right)^p\right) = E\left((\eta_1+\eta_2+\dots+\eta_n)^p\right)E\left(\frac{1}{\xi^p}\right) = \frac{\Gamma(p+n)}{(n-1)!}E\left(\frac{1}{\xi^p}\right),
\]
where in the last equality we have used the well-known formula for the moments of η₁ + η₂ + ... + η_n ∈ Gamma(n, 1).
Now, we use the definition of expectation together with (2) and compute
\[
E\left(\frac{1}{\xi^p}\right) = \frac{1}{a^p(1-p)} + \frac{\lambda^{p-1}}{a}\left((p + a\lambda)\Gamma(1-p, \lambda a) - (\lambda a)^{1-p}e^{-\lambda a} - p\Gamma(1-p)\right),
\]
which completes the proof. □
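Theorem 3 d) and Definition 5 can likewise be cross-checked numerically: simulate T_n as θ_n/ξ with θ_n ∈ Gamma(n, 1) and compare the empirical c.d.f. with the one obtained by integrating the density from Definition 5. The sketch below is an editorial illustration; the helper names are ours and the incomplete Gamma functions are expressed through SciPy's regularized versions.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc, gammaincc
from scipy.integrate import quad

def erlang_max_u_exp_pdf(t, a, lam, n):
    """P.d.f. of the Erlang-Max-U-Exp(n; a, lambda) distribution (Definition 5), for t > 0."""
    c = lam + t
    lower = lambda s, z: gamma_fn(s) * gammainc(s, z)    # lower incomplete gamma(s, z)
    upper = lambda s, z: gamma_fn(s) * gammaincc(s, z)   # upper incomplete Gamma(s, z)
    bracket = (lower(n + 1, a * t) / t**(n + 1)
               + (lam * n - t) * lower(n + 1, a * c) / c**(n + 2)
               + lam * n * a * upper(n, a * c) / c**(n + 1))
    return t**(n - 1) / (a * gamma_fn(n)) * bracket      # gamma_fn(n) = (n-1)!

a, lam, n = 2.0, 1.5, 3

# Theorem 3 d): T_n has the same law as theta_n / xi with theta_n ~ Gamma(n, 1) independent of xi
rng = np.random.default_rng(4)
size = 200_000
xi = np.maximum(rng.uniform(0, a, size), rng.exponential(1 / lam, size))
T = rng.gamma(n, 1.0, size) / xi

for t in (1.0, 2.0, 4.0):
    cdf_from_pdf = quad(erlang_max_u_exp_pdf, 0, t, args=(a, lam, n))[0]
    print(t, (T <= t).mean(), cdf_from_pdf)   # the two columns should match
```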

4 THE MIXED POISSON-MAX-U-EXP PROCESS


Definition 6. We say that a r.v. θ is Mixed Poisson-Max-U-Exp distributed with parameters a > 0 and λ > 0 if, for n = 0, 1, ...,
\[
P(\theta = n) = \frac{1}{n!}\left[\frac{\gamma(n+1, a)}{a} + (n\lambda - 1)\frac{\gamma(n+1, a(\lambda+1))}{a(\lambda+1)^{n+2}} + n\lambda\,\frac{\Gamma(n, a(\lambda+1))}{(\lambda+1)^{n+1}}\right]. \qquad (8)
\]
Briefly, θ ∈ MPMax-U-Exp(a, λ).
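The p.m.f. (8) can be evaluated with standard incomplete Gamma routines. The following editorial sketch (the function names are ours) checks that (8) sums to one over a sufficiently long range of n and agrees with the mixing integral of a Poisson p.m.f. against the density (2).

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc, gammaincc, factorial
from scipy.integrate import quad

def mpmaxuexp_pmf(n, a, lam):
    """P.m.f. (8) of the Mixed Poisson-Max-U-Exp(a, lambda) distribution."""
    c = lam + 1.0
    term1 = gamma_fn(n + 1) * gammainc(n + 1, a) / a
    term2 = (n * lam - 1.0) * gamma_fn(n + 1) * gammainc(n + 1, a * c) / (a * c**(n + 2))
    term3 = 0.0 if n == 0 else n * lam * gamma_fn(n) * gammaincc(n, a * c) / c**(n + 1)
    return (term1 + term2 + term3) / factorial(n)

a, lam = 2.0, 1.5
print(sum(mpmaxuexp_pmf(n, a, lam) for n in range(80)))   # ~ 1.0

# cross-check one probability against the mixing integral of a Poisson p.m.f. over the density (2)
def maxuexp_pdf(x):
    return np.where(x <= a, (1.0 - np.exp(-lam * x) + lam * x * np.exp(-lam * x)) / a,
                    lam * np.exp(-lam * x))

n = 3
integrand = lambda x: np.exp(-x) * x**n / factorial(n) * maxuexp_pdf(x)
direct = quad(integrand, 0, a)[0] + quad(integrand, a, np.inf)[0]
print(mpmaxuexp_pmf(n, a, lam), direct)   # the two values should agree
```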
Definition 7. Let µ(t) : [0, ∞) → [0, ∞) be a nonnegative, strictly increasing and continuous function with µ(0) = 0, let ξ ∈ Max-U-Exp(a; λ), and let N₁ be a Homogeneous Poisson process (HPP) with intensity 1, independent of ξ. We call the random process
\[
N := \{N(t), t \ge 0\} = \{N_1(\xi\mu(t)), t \ge 0\} \qquad (9)
\]
a Mixed Poisson process with Max-U-Exp mixing variable or MPMax-U-Exp process. Briefly, N ∈ MPMax-U-Exp(a, λ; µ(t)).
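For a fixed time t, the one-dimensional distribution of the process (9) is easy to simulate: conditionally on ξ, N(t) is Poisson with mean ξµ(t). The editorial sketch below (the choice µ(t) = t² is only one example of an admissible µ) compares the empirical mean of N(t) with µ(t)Eξ, as stated in Theorem 4 b) below, and illustrates the over-dispersion.

```python
import numpy as np

rng = np.random.default_rng(5)
a, lam = 2.0, 1.5
mu = lambda t: t**2     # any nonnegative, strictly increasing, continuous mu with mu(0) = 0

def sample_N(t, size):
    """Draw from the law of N(t) = N1(xi * mu(t)): given xi, N(t) is Poisson(xi * mu(t))."""
    xi = np.maximum(rng.uniform(0, a, size), rng.exponential(1 / lam, size))
    return rng.poisson(xi * mu(t))

t = 1.3
counts = sample_N(t, 200_000)
mean_theory = mu(t) * (a / 2 + (1 - np.exp(-lam * a)) / (a * lam**2))   # mu(t) * E(xi)
print(counts.mean(), mean_theory)
print(counts.var(), counts.mean())   # over-dispersion: the variance exceeds the mean
```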
Definition 8. Let n ∈ N. We say that a random vector (N₁, N₂, ..., N_n) is Ordered Poisson-Max-U-Exp distributed with parameters a > 0, λ > 0, and 0 < µ₁ < µ₂ < ... < µ_n if, for all integers 0 ≤ k₁ ≤ k₂ ≤ ... ≤ k_n,
\[
P(N_1 = k_1, N_2 = k_2, \dots, N_n = k_n) = \frac{\mu_1^{k_1}(\mu_2-\mu_1)^{k_2-k_1}\cdots(\mu_n-\mu_{n-1})^{k_n-k_{n-1}}}{a\,k_1!(k_2-k_1)!\cdots(k_n-k_{n-1})!}\left[\frac{\gamma(k_n+1, a\mu_n)}{\mu_n^{k_n+1}} + (\lambda k_n - \mu_n)\frac{\gamma(k_n+1, a(\lambda+\mu_n))}{(\lambda+\mu_n)^{k_n+2}} + \lambda a k_n\frac{\Gamma(k_n, a(\lambda+\mu_n))}{(\lambda+\mu_n)^{k_n+1}}\right],
\]
and P(N₁ = k₁, N₂ = k₂, ..., N_n = k_n) = 0, otherwise. Briefly,

(N₁, N₂, ..., N_n) ∈ OPMUE(a, λ; µ₁, µ₂, ..., µ_n).

Definition 9. Let n ∈ N. We say that a random vector (N₁, N₂, ..., N_n) is Mixed Poisson-Max-U-Exp distributed with parameters a > 0, λ > 0, and 0 < µ₁ < µ₂ < ... < µ_n if, for all m₁, m₂, ..., m_n ∈ {0, 1, ...},
\[
P(N_1 = m_1, N_2 = m_2, \dots, N_n = m_n) = \frac{\mu_1^{m_1}(\mu_2-\mu_1)^{m_2}\cdots(\mu_n-\mu_{n-1})^{m_n}}{a\,m_1!m_2!\cdots m_n!}\left[\frac{\gamma(m_1+\dots+m_n+1, a\mu_n)}{\mu_n^{m_1+\dots+m_n+1}} + (\lambda(m_1+\dots+m_n) - \mu_n)\frac{\gamma(m_1+\dots+m_n+1, a(\lambda+\mu_n))}{(\lambda+\mu_n)^{m_1+\dots+m_n+2}} + \lambda a(m_1+\dots+m_n)\frac{\Gamma(m_1+\dots+m_n, a(\lambda+\mu_n))}{(\lambda+\mu_n)^{m_1+\dots+m_n+1}}\right],
\]
and P(N₁ = m₁, N₂ = m₂, ..., N_n = m_n) = 0, otherwise. Briefly,

(N₁, N₂, ..., N_n) ∈ MPMUE(a, λ; µ₁, µ₂, ..., µ_n).

The next statements are analogous to the corresponding ones in [3] and [4].
Proposition 3. If (N₁, N₂, ..., N_n) ∈ OPMUE(a, λ; µ₁, µ₂, ..., µ_n), then

(N₁, N₂ − N₁, ..., N_n − N_{n−1}) ∈ MPMUE(a, λ; µ₁, µ₂, ..., µ_n).

Proposition 4. If (N₁, N₂, ..., N_n) ∈ MPMUE(a, λ; µ₁, µ₂, ..., µ_n), then

(N₁, N₁ + N₂, ..., N₁ + N₂ + ... + N_n) ∈ OPMUE(a, λ; µ₁, µ₂, ..., µ_n).

In the next theorem we investigate the main properties of the MPMax-U-Exp process N, defined in (9).
Theorem 4. Let a > 0, λ > 0, and µ(t) : [0, ∞) → [0, ∞) be a nonnegative, strictly increasing and continuous function, and let {N(t), t ≥ 0} ∈ MPMax-U-Exp(a, λ; µ(t)).
a) For all t > 0, N(t) ∈ MPMax-U-Exp(aµ(t), λ/µ(t)).

b) These processes are over-dispersed,
\[
EN(t) = \frac{a}{2}\mu(t) + \frac{\mu(t)}{a\lambda^2}\left(1 - e^{-\lambda a}\right),
\]
\[
DN(t) = \frac{a}{2}\mu(t) + \frac{\mu(t)}{a\lambda^2}\left(1 - e^{-\lambda a}\right) + \mu^2(t)\left[\frac{a^2}{12} - \frac{1}{\lambda^2}\left(1 + e^{-\lambda a}\right) + \frac{4}{a\lambda^3}\left(1 - e^{-\lambda a}\right) - \frac{1}{a^2\lambda^4}\left(1 - e^{-\lambda a}\right)^2\right].
\]

c) The probability generating function (p.g.f.) of the time intersections is
\[
E(z^{N(t)}) = \frac{1 - e^{-a\mu(t)(1-z)}}{a\mu(t)(1-z)} - \frac{1 - e^{-(\lambda+\mu(t)(1-z))a} - \lambda a e^{-(\lambda+\mu(t)(1-z))a}}{a(\lambda+\mu(t)(1-z))} + \frac{\lambda\left(1 - e^{-(\lambda+\mu(t)(1-z))a} - a(\lambda+\mu(t)(1-z))e^{-(\lambda+\mu(t)(1-z))a}\right)}{a(\lambda+\mu(t)(1-z))^2}, \quad |z| < 1.
\]

d) For t > 0 and n = 0, 1, ..., P_ξ(x|N(t) = n) = 0 when x ≤ 0,
\[
P_\xi(x|N(t) = n) = \frac{(\mu(t))^{n+1}x^n e^{-\mu(t)x}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)}{\gamma(n+1, a\mu(t)) + \dfrac{\mu^{n+1}(t)\left(n\lambda - \mu(t)\right)\gamma(n+1, a(\lambda+\mu(t)))}{(\lambda+\mu(t))^{n+2}} + \dfrac{na\lambda\mu^{n+1}(t)\,\Gamma(n, a(\lambda+\mu(t)))}{(\lambda+\mu(t))^{n+1}}}, \quad x \in (0, a],
\]
\[
P_\xi(x|N(t) = n) = \frac{a\lambda(\mu(t))^{n+1}x^n e^{-(\lambda+\mu(t))x}}{\gamma(n+1, a\mu(t)) + \dfrac{\mu^{n+1}(t)\left(n\lambda - \mu(t)\right)\gamma(n+1, a(\lambda+\mu(t)))}{(\lambda+\mu(t))^{n+2}} + \dfrac{na\lambda\mu^{n+1}(t)\,\Gamma(n, a(\lambda+\mu(t)))}{(\lambda+\mu(t))^{n+1}}}, \quad x > a.
\]

e) For t > 0 and n = 0, 1, ..., the mean square regression is
\[
E(\xi|N(t) = n) = \frac{\dfrac{\gamma(n+2, a\mu(t))}{a\mu^{n+2}(t)} + \dfrac{\gamma(n+2, a(\mu(t)+\lambda))}{a(\mu(t)+\lambda)^{n+3}}\left(\lambda(n+1) - \mu(t)\right) + \dfrac{\lambda(n+1)}{(\lambda+\mu(t))^{n+2}}\Gamma(n+1, a(\lambda+\mu(t)))}{\dfrac{\gamma(n+1, a\mu(t))}{a\mu^{n+1}(t)} + \dfrac{\gamma(n+1, a(\mu(t)+\lambda))}{a(\mu(t)+\lambda)^{n+2}}\left(\lambda n - \mu(t)\right) + \dfrac{\lambda n}{(\lambda+\mu(t))^{n+1}}\Gamma(n, a(\lambda+\mu(t)))}.
\]

f) For all k = 0, 1, ...,
\[
E\left[N(t)(N(t)-1)\cdots(N(t)-k+1)\right] = \frac{(a\mu(t))^k}{k+1} + \frac{k\mu^k(t)}{a\lambda^{k+1}}\gamma(k+1, a\lambda) + \frac{k\mu^k(t)}{\lambda^k}\Gamma(k, \lambda a).
\]

g) For all n ∈ N, and 0 ≤ t₁ ≤ t₂ ≤ ... ≤ t_n,

(N(t₁), N(t₂), ..., N(t_n)) ∈ OPMUE(a, λ; µ(t₁), µ(t₂), ..., µ(t_n)).

h) For all n ∈ N, and 0 ≤ t₁ ≤ t₂ ≤ ... ≤ t_n,

(N(t₁), N(t₂) − N(t₁), ..., N(t_n) − N(t_{n−1})) ∈ MPMUE(a, λ; µ(t₁), µ(t₂), ..., µ(t_n)).

i) Denote by τ₁, τ₂, ... the inter-occurrence times of the counting process N. Then, τ₁, τ₂, ... are dependent and Exp-Max-U-Exp(a; λ) distributed.

j) For n ∈ N, if T_n is the moment of occurrence of the n-th event of the counting process N, then T_n ∈ Erlang-Max-U-Exp(n; a, λ).

Proof: a) Consider t > 0 and n ∈ N ∪ {0}. By Definition 7 we have that
P(N(t) = n) = P(N₁(ξµ(t)) = n).
The integral form of the Total probability formula and the independence between the random process N₁ and the r.v. ξ entail
\[
P(N(t) = n) = \int_0^\infty P(N_1(\xi\mu(t)) = n|\xi = x)P_\xi(x)\,dx = \int_0^\infty P(N_1(x\mu(t)) = n)P_\xi(x)\,dx.
\]
Now, by using the definition of the Poisson distribution and (2), for n = 0, 1, ... we obtain
\[
P(N(t) = n) = \int_0^\infty \frac{(x\mu(t))^n}{n!}e^{-x\mu(t)}P_\xi(x)\,dx = \int_0^a \frac{(x\mu(t))^n}{n!}e^{-x\mu(t)}\,\frac{1}{a}\left(1 - e^{-\lambda x} + \lambda x e^{-\lambda x}\right)dx + \int_a^\infty \frac{(x\mu(t))^n}{n!}e^{-x\mu(t)}\,\lambda e^{-\lambda x}\,dx
\]
\[
= \frac{\gamma(n+1, a\mu(t))}{a\mu(t)\,n!} + \frac{\mu^n(t)\,\gamma(n+1, a(\lambda+\mu(t)))}{a(\lambda+\mu(t))^{n+2}\,n!}\left(n\lambda - \mu(t)\right) + \frac{n\lambda\mu^n(t)\,\Gamma(n, a(\lambda+\mu(t)))}{(\lambda+\mu(t))^{n+1}\,n!}.
\]
By Definition 6, the last expression is exactly the p.m.f. of the MPMax-U-Exp(aµ(t), λ/µ(t)) distribution. The rest follows by the uniqueness of the correspondence between the p.m.f. and the probability law of the r.v.
b) follows by the general formulae for the mean and the variance of Mixed Poisson
distribution which could be seen for example in Proposition 2.1.i) and ii) in [1] and
Theorem 1 b) and c).
c) Let |z| < 1 and t ≥ 0. By the Double expectation formula we have the general formula for the p.g.f. of a Mixed Poisson process, which can be seen for example in [1] for the case when µ(t) ≡ t, t > 0. It is E(z^{N(t)}) = E(e^{−µ(t)ξ(1−z)}). Now, Theorem 1, d) completes the proof of this statement.

d) The Bayes rule, a), Definition 4, (2) and the definition for Poisson distribution
entail the desired result.
e) In order to prove this statement we use the definition for the expectation and
d).
f) Remark 2.1, p. 15 in [1] expresses the relation between these factorial moments
and the moments of the mixing variable. Now, we use Theorem 1, b) and complete
the proof of this point.
g) and h) are proved analogously to the corresponding results in [3] and [4].
i) follows by Definition 7, the properties of the HPP, and Theorem 2, a).
j) follows by Definition 5, the properties of the HPP, and Theorem 2, d). □
Notes: 1. As it is noticed in Grandell [1], in the case µ(t) = t, t > 0, any Mixed Poisson process is a birth process with transition intensities given by E(ξ|N(t) = n). The converse is not true.
In the general case µ = µ(t), t > 0, the transition intensities are analogous; however, first we need to apply the non-random time-change µ to the initial birth process.
2. As far as any Mixed Poisson process with µ(t) = t, t > 0, for any 0 < s < t has Binomial conditional distributions (N(s)|N(t) = n) ∈ Bi(n, s/t) (see Grandell [1], p. 98), in the general case for µ we have
\[
(N(s)|N(t) = n) \in Bi\left(n, \frac{\mu(s)}{\mu(t)}\right).
\]
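This conditional Binomial structure can be checked by simulation: given ξ, the increments of N are independent Poisson variables, so conditioning on N(t) = n and looking at N(s) should reproduce Bi(n, µ(s)/µ(t)). The sketch below is an editorial illustration with µ(t) = t² chosen arbitrarily.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(6)
a, lam = 2.0, 1.5
mu = lambda t: t**2
s, t, n = 0.8, 1.3, 3

xi = np.maximum(rng.uniform(0, a, 1_000_000), rng.exponential(1 / lam, 1_000_000))
Ns = rng.poisson(xi * mu(s))
Nt = Ns + rng.poisson(xi * (mu(t) - mu(s)))    # independent increments, conditionally on xi

cond = Ns[Nt == n]                              # observations of N(s) given N(t) = n
emp = np.bincount(cond, minlength=n + 1) / cond.size
print(emp)                                      # empirical conditional distribution
print(binom.pmf(np.arange(n + 1), n, mu(s) / mu(t)))   # Bi(n, mu(s)/mu(t))
```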

5 CONCLUSIONS
Mixed Poisson processes represent a generalization of homogeneous ones, allowing the
rate λ to be a random variable. The distribution of λ is called the structure distribution and may be regarded as a prior distribution. This work considers a new structure
distribution, called Max-U-Exp distribution. It is the distribution of the maximum
of two random variables - Uniform and Exponential. Properties of this distribution
are considered and an algorithm for estimating its parameters is developed. It is
based on a combination of the method of moments and the least squares method. Exp-
Max-U-Exp distribution is defined and thoroughly investigated. It arises in a natural
way as a distribution of the inter-arrival times in the Mixed Poisson process with
Max-U-Exp mixing variable. The distribution of the renewal moments (arrival times)
is called Erlang-Max-U-Exp and is defined via its probability density function. By
using mainly the previous general theory about Mixed Poisson processes, developed by Grandell [1], and Karlis and Xekalaki [2], and analogously to Jordanova et al. [3], and Jordanova and Stehlik [4], we define and investigate the properties of the new random vectors and random variables, which are related to this particular case
of a Mixed Poisson process. The paper shows new explicit relations between the

considered random elements. In an analogous way, many different univariate and
multivariate distributions could be defined, and different relations between the new
classes of probability laws could be explained.

6 ACKNOWLEDGMENTS
The work was supported by the Scientific Research Fund of Konstantin Preslavsky University of Shumen, Bulgaria, under Grant Number RD-08-35/18.01.2023, and by project Number 2023-FNSE-04, financed by the Scientific Research Fund of Ruse University.

References
[1] Grandell, J., Mixed Poisson Processes, CRC Press, vol. 77, 1997.

[2] Karlis, D., Xekalaki, E., Mixed Poisson distributions, International Statistical
Review, vol. 73 (1), pp. 35-58 (2005).

[3] Jordanova, P., Savov, M., Tchorbadjieff, A., Stehlik, M., Mixed Poisson Process with Stacy mixing variable, arXiv: 2303.10226.

[4] Jordanova, P., Stehlik, M., Mixed Poisson Process with Pareto mixing variable and its risk applications, Lithuanian Mathematical Journal, vol. 56(2), pp. 189-206 (2016).

[5] Gupta, R. D., Kundu, D., Generalized exponential distribution: different method of estimations, Journal of Statistical Computation and Simulation, vol. 69(4), pp. 315-337 (2001). DOI: 10.1080/00949650108812098.
