I Need A Proof Of The Lack Of Memory Property For A Geometric Distribution

Geometric distribution?

So I need help with geometric distribution. I think I have an idea, but I just want to make sure I am doing it the correct way.
A cereal maker places a game piece in each of its cereal boxes. The probability of winning a prize in the game is 1 in 4. Find the probability that you (a) win your prize with your fourth purchase, (b) win your first prize with your first, second, or third purchases, and (c) do not win a prize with your first four purchases.
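As a numerical check of the standard setup (assuming X is the purchase on which the first prize is won, so P(X = x) = (3/4)^(x-1) * (1/4)), the three parts can be computed exactly:

```python
from fractions import Fraction

p = Fraction(1, 4)   # P(win a prize) per box
q = 1 - p            # P(no prize) per box

a = q**3 * p         # (a) first prize on the 4th purchase
b = 1 - q**3         # (b) first prize on purchase 1, 2, or 3
c = q**4             # (c) no prize in the first four purchases

print(a, float(a))   # 27/256 ≈ 0.1055
print(b, float(b))   # 37/64  ≈ 0.5781
print(c, float(c))   # 81/256 ≈ 0.3164
```

Part (b) is easiest via the complement: "first prize within three purchases" is one minus "no prize in three purchases".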

Memoryless Property of the Geometric Distribution?

X > k is correct as the conditioning event; if it were X ≥ k the sums below would simply start one term earlier.

*********

For a geometric distribution, let X be the trial on which the first success occurs, so the probability of a success on the ith trial is:

P(X=i) = p(1-p)^(i-1) (1)

i = 1,2,3...

Remember the relation for conditional probability:

P(B|A) = P(A∩B)/P(A) (2)

In your case the events are A = { X > k } and B = { X ≤ k + j }, and the probabilities are:

P(B|A) = P( X ≤ k + j | X > k )

P(A∩B) = P( k < X ≤ k + j ) = p(1-p)^k + p(1-p)^(k+1) + ... + p(1-p)^(k+j-1)

=> P(A∩B) = P( k < X ≤ k + j ) = (1-p)^k ( p(1-p)^(0) + p(1-p)^(1) + ... + p(1-p)^(j-1) )

=> P(A∩B) = P( k < X ≤ k + j ) = (1-p)^k P( X ≤ j ) (3)

P(A) = P( X > k ) = p(1-p)^k + p(1-p)^(k+1) + ... (4)

Use (3) and (4) in (2) to give:

P(B|A) = P( X ≤ k + j | X > k ) = (1-p)^k P( X ≤ j ) / ( p(1-p)^k + p(1-p)^(k+1) + ... )

=> P(B|A) = P( X ≤ k + j | X > k ) = P( X ≤ j ) / ( p(1-p)^(0) + p(1-p)^(1) + p(1-p)^2 + ... ) (5)

The denominator here is the total probability, i.e. the probability that the first success occurs on some trial:

P( X ≥ 1 ) = p(1-p)^(0) + p(1-p)^(1) + p(1-p)^2 + ... = 1 (6)

Use (6) in (5) to give:

P( X ≤ k + j | X > k ) = P( X ≤ j ) / 1 = P( X ≤ j )

hence your result:

P( X ≤ k + j | X > k ) = P( X ≤ j ) (7)
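Identity (7) can be checked exactly in a few lines, using the convention that X is the trial of the first success; the values of p, k, and j below are arbitrary examples, and exact rational arithmetic avoids any rounding doubt:

```python
from fractions import Fraction

def pmf(i, p):
    """P(X = i) for a geometric X = trial of the first success."""
    return p * (1 - p) ** (i - 1)

def cdf(n, p):
    """P(X <= n)."""
    return sum(pmf(i, p) for i in range(1, n + 1))

p = Fraction(1, 4)  # arbitrary example value
for k in range(0, 6):
    for j in range(1, 6):
        # P(X <= k+j | X > k) = P(k < X <= k+j) / P(X > k)
        lhs = (cdf(k + j, p) - cdf(k, p)) / (1 - cdf(k, p))
        rhs = cdf(j, p)  # P(X <= j)
        assert lhs == rhs
print("memoryless identity (7) holds exactly")
```

Because Fraction keeps everything as exact rationals, the equality test is literal, not approximate.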

Questions?

Prove that the geometric distribution has the memoryless property?

We need to show that
Pr(x>s+t | x≥t) = Pr(x>s).
I took this definition from [1], near the end.

As x>s+t implies x≥t, the conditional probability on the left is equal to
Pr(x>s+t | x≥t) = Pr(x>s+t) / Pr(x≥t)
(the denominator will never be zero) and the equation can be written as
Pr(x>s+t) = Pr(x>s) Pr(x≥t).

We will need to compute what Pr(x>n) is for our distribution. I consider it easier to compute
Pr(x≤n) = Σ [k=0 to n] p*(1-p)^k = p * ((1-p)^(n+1) - 1) / ((1-p)-1) = -((1-p)^(n+1) - 1) = 1 - (1-p)^(n+1),
from which it follows that
Pr(x>n) = (1-p)^(n+1).

Thus Pr(x>s+t) = (1-p)^(s+t+1) and Pr(x>s) = (1-p)^(s+1). The last missing term can be computed as
Pr(x≥t) = Pr(x>t) + Pr(x=t) = (1-p)^(t+1) + p*(1-p)^t = ((1-p) + p) * (1-p)^t = (1-p)^t.

Plugging all of this into our equation, we obtain
(1-p)^(s+t+1) = (1-p)^(s+1) * (1-p)^t,
which is true by the properties of powers.
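The identity Pr(x>s+t) = Pr(x>s) Pr(x≥t) can also be spot-checked by simulation; this is a rough sketch with arbitrary example values of p, s, and t, using the same convention as the proof (x counts failures before the first success):

```python
import random

def geom0(p, rng):
    """Sample x = number of failures before the first success, P(x=k) = p(1-p)^k."""
    k = 0
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(42)
p, s, t, n = 0.3, 2, 3, 200_000
xs = [geom0(p, rng) for _ in range(n)]

given_t = [x for x in xs if x >= t]
lhs = sum(x > s + t for x in given_t) / len(given_t)  # Pr(x>s+t | x>=t)
rhs = sum(x > s for x in xs) / n                      # Pr(x>s)
print(lhs, rhs, (1 - p) ** (s + 1))                   # all three should be close
```

The third printed value is the closed form Pr(x>s) = (1-p)^(s+1) derived above, so both estimates should land near it.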

Is hypergeometric distribution memoryless?

The hypergeometric distribution is one of the distributions related to the Bernoulli process. A single trial for a Bernoulli process, called a Bernoulli trial, ends with one of two outcomes, one often called success, the other called failure. Success occurs with probability p while failure occurs with probability 1 − p, usually denoted q.

A Bernoulli process consists of repeated independent Bernoulli trials with the same parameter p. These trials form a random sample from the Bernoulli population. A Bernoulli process uses discrete time, whereas the related Poisson process uses continuous time. Both processes are memoryless: what has happened in the past has no effect on what will happen in the future.

The hypergeometric distribution is used to answer a particular question about Bernoulli processes. Given that there are M successes among N trials, if you ask how many of the first n trials are successes, then the answer will have a Hypergeometric(N, M, n) distribution. In a sense, this is memoryless: what happened before the N trials has no bearing on the answer. On the other hand, you have to know what N and M are in order to ask the question.

The distribution related to a Bernoulli process that's called a memoryless distribution is the geometric distribution. It answers the question: how many trials will it take to get the first success? It's memoryless because the conditional probability of getting the first success on the (m+n)th trial, given that there was no success in the first m trials, is the same as the probability of getting the first success on the nth trial. That is, the past has no bearing on the future.

Geometric distribution?

I am not sure of my answer.

If x has a geometric distribution, P(x=k) = p(1-p)^(k-1) for k = 1, 2, 3, ...
The sum of these probabilities from 1 to infinity is 1.
P(x is even) = P(x=2) + P(x=4) + ... = p(1-p) + p(1-p)^3 + ... = (1-p)/(2-p), since this is a geometric series with first term p(1-p) and ratio (1-p)^2. (Note the answer depends on p; it is not 1/2 in general.)
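One way to evaluate the even-index sum for a generic p is to compare a long partial sum against the closed form of the geometric series with ratio (1-p)^2; p = 0.3 below is just an example value, since the original problem's p is not specified:

```python
p = 0.3  # arbitrary example value
# Sum P(x=k) over even k: p(1-p) + p(1-p)^3 + p(1-p)^5 + ...
partial = sum(p * (1 - p) ** (k - 1) for k in range(2, 2001, 2))
closed = (1 - p) / (2 - p)  # geometric series, first term p(1-p), ratio (1-p)^2
print(partial, closed)      # the two agree
```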

Geometric distribution?

This question is not using the geometric distribution.

The question asks: what is the probability that 4 out of 200 people have the disease? This is a binomial question.

The geometric distribution counts the number of trials until the first success.

In this problem, let X be the number of people with the disease. X has the binomial distribution with n = 200 trials and success probability p = 0.03.

In general, if X has the binomial distribution with n trials and a success probability of p then
P[X = x] = n!/(x!(n-x)!) * p^x * (1-p)^(n-x)
for values of x = 0, 1, 2, ..., n
P[X = x] = 0 for any other value of x.

P(X = 4) = 200! / (4! 196!) * (0.03)^4 * (0.97)^196
= 0.1338284

--------------------

if you wanted to use the geometric distribution the question would say, what is the probability that the fourth person sampled is the first with the disease?

Let Y be the number of the trial on which the first success occurs.
Y has the Geometric distribution with success probability p = 0.03

P(Y = y) = p * (1-p) ^ (y-1) for y = 1, 2, 3, 4, 5, ......
P(Y = y) = 0 for all other values of y

P(Y = 4) = 0.03 * (0.97)^3 = 0.02738019
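Both calculations can be reproduced in a few lines. Note that with the pmf P(Y=y) = p(1-p)^(y-1), the geometric value uses the exponent 3, not 4: the fourth person is the first with the disease exactly when the first three are healthy and the fourth is not.

```python
from math import comb

n, p = 200, 0.03

# Binomial: P(exactly 4 of the 200 people sampled have the disease)
binom4 = comb(n, 4) * p**4 * (1 - p)**(n - 4)

# Geometric: P(the 4th person sampled is the first with the disease)
geom4 = p * (1 - p)**3

print(round(binom4, 7))  # ≈ 0.1338
print(round(geom4, 7))   # ≈ 0.0274
```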

Geometric Distribution?

You're approaching it from the wrong side. Adding up all the possibilities is too much work, and will lead to errors.

Whenever the desired outcome (first defective is one of the first six) has multiple possibilities, you should look at its opposite:

first defective is after the first six

That means the first six are not defective. The probability of this is:

(1 - 2.4%) ^ 6 = 0.976 ^ 6 = 0.864368449

So the probability you want is:

1 - 0.864368449 = 0.135631551

which is about 13.56%
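The complement calculation above, using the values from the problem, is a one-liner:

```python
p_def = 0.024                     # probability an item is defective
none_in_six = (1 - p_def) ** 6    # first six items all non-defective
answer = 1 - none_in_six          # first defective is among the first six
print(none_in_six, answer)        # ≈ 0.864368, ≈ 0.135632
```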

Why is geometric distribution memoryless, but binomial isn't? How can I understand it intuitively, beyond the formula proofs?

The question doesn't really apply to the binomial distribution. Any sequence of independent trials is memoryless in the sense that earlier trials have no effect on later trials.

If you do a sequence of Bernoulli trials and count trials until a success occurs, then so long as you haven't had that success, further trials are independent. So it's not that the distribution is memoryless; the sequence of trials is memoryless. The experiment is complete only when the first success occurs. Until then, the number of trials remaining has the same distribution whatever number have already occurred. And that is the memoryless property of the sequence.

But for the binomial, the experiment finishes when a set number of trials have been done. Part way through, the number of successes from that point on is limited by the number of trials remaining. Clearly not memoryless.

Geometric Distribution Question?

For a 12-sided die, what is the probability that the first 10 will be on the third roll?

My solution:

p= 1/12 = 0.083

q= 11/12 = 0.92

x= 3

P(X) = (0.083)^3 x 0.92
= 0.00526044

However, the answer in the back of the text book is 0.0700....

What am I doing wrong?
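For comparison with the textbook value: the geometric pmf puts the failure probability, not the success probability, to the power x−1. The first 10 on the third roll requires two non-10s and then a 10, so P(X=3) = q^2 * p, not p^3 * q:

```python
p = 1 / 12        # probability of rolling a 10 on a 12-sided die
q = 11 / 12       # probability of not rolling a 10
ans = q ** 2 * p  # no 10 on rolls 1 and 2, then a 10 on roll 3
print(round(ans, 4))  # 0.07, matching the textbook answer
```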

What is the intuition behind the memory-less property of geometric and exponential random variables?

Geometric Distribution

A Geometric distribution with parameter p can be considered as the number of trials of independent Bernoulli(p) random variables until the first success. Consider a coin that lands heads with probability p. One flip of the coin is a Bernoulli(p) trial. The number of flips until you see the first head is distributed as Geometric(p).

So why does this show the memoryless property? Simply and most intuitively: if I pick up my unfair coin now and start trials to generate a Geometric variable, does what I did with the coin before I started make any difference? If I flip the coin a hundred times before my test starts, will I get any different answer once I do start trials? I would say that this is intuitively an obvious no!

Exponential Distribution

The slightly bigger "intuitive leap" is when we think about the exponential distribution. The exponential distribution is continuous, so we cannot talk about discrete events like a coin flip. We can, however, think of the exponential distribution as a limit of a geometric distribution where we divide the continuous domain into smaller intervals.

First, let us divide the domain into slices of length 1, and treat each slice as a trial with success probability p. The time until our first success is geometrically distributed with parameter p, and so has the memoryless property.

Now let us divide the domain into smaller slices of length x, and treat each as a trial with success probability x*p. The number of slices until our first success is geometric with parameter x*p, and the time until the first success is that count multiplied by x. This is just a constant multiple of a geometric random variable, so it is again memoryless.

This x can get as small as we like, and as it tends to zero we get the Exponential distribution (the proof is left to the reader!). Hopefully it is intuitive to see that this limit retains the memoryless property.
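The limiting argument can be sketched numerically: sample the time to first success for a small slice width x and compare its mean to the Exponential(p) mean 1/p. The values of x, p, and the sample size below are arbitrary example choices:

```python
import random

def scaled_geometric(rng, x, p):
    """Time to first success when each slice of width x succeeds with prob x*p."""
    t = x  # the success slice itself contributes one slice of width x
    while rng.random() >= x * p:
        t += x
    return t

rng = random.Random(0)
p = 1.0        # target exponential rate (example value)
x = 0.01       # slice width; smaller x gets closer to the exponential
n = 20_000
samples = [scaled_geometric(rng, x, p) for _ in range(n)]
mean = sum(samples) / n
print(mean)    # close to the Exponential(p) mean, 1/p = 1.0
```

Shrinking x further drives the sampled distribution toward Exponential(p), and at every stage the variable is a constant multiple of a geometric, hence memoryless.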
