Recent posts in Probability

1

https://www.probabilitycourse.com/

The webpage of Hossein Pishro-Nik's textbook Introduction to Probability, Statistics, and Random Processes. It is an open-access, peer-reviewed textbook designed for undergraduate and first-year graduate courses on the subject. The book may be used by both students and practitioners in engineering, mathematics, finance, and other related fields.

Mohitdas posted in Probability Dec 29, 2021
1,285 views
2

1. Overview

In this article, we’ll discuss the geometric probability distribution and its properties. We will prove each property mathematically and understand its significance. This article assumes basic knowledge of discrete mathematics and algebra for proofs.

2. Basic Definitions

We begin with a few basic definitions that will set the stage for things to come.

Sample space: A countable set of discrete points $\Omega$. A set $S$ is said to be countable when there exists a bijection $f: S \to A$, where $A$ is a known countable set such as the set of integers. Each point in the sample space represents an outcome.

A sample space may also be made of a continuous interval of points, but that is outside the scope of this article.

Conditional probability: Given events $A$ and $B$ as subsets of $\Omega$, the value $P(A | B)$ is the probability that $A$ occurs given that $B$ has already occurred. It is the probability of $A$ scaled down to the conditional universe of $B$.

Discrete random variable: A random variable $X$ is a function $g: \Omega \to A$ where $A$ is a subset of $\mathbb{R}$ and $\Omega$ is the sample space. Think of $X$ as taking on input values in $\Omega$ with a certain probability and providing a corresponding meaningful output. For example, suppose we have a set of students and we want to measure their weights. The set of students is our sample space $\Omega$, and $W$ could be a random variable that takes any student in this set as input and outputs the weight associated with that student. The more frequently a weight value is seen, the more probability it has of occurring.

Probability mass function (PMF): A function $f: A \to [0, 1]$ where $A$ is the subset of values that $X$ takes on, as given above. We usually use the notation $P(X = x)$ where $x \in A$.

Cumulative distribution function (CDF): In terms of our notation, it is the function $P(X \le x)$. 

A random variable $X$ is called geometric if its PMF follows the geometric distribution, which we describe next.

3. The Geometric Probability Mass Function

A geometric random variable $X$ represents the number of trials it takes for an event of interest to occur. Each trial is independent and has no bearing on the others. $X$ takes on the values $1$, $2$, $3$ and so on.

For example, we might want to measure the number of times a coin is tossed until a head appears. Say the heads outcome occurs with probability $p$. This implies that the only other outcome, tails, happens with probability $1 - p$. The geometric PMF then takes the form:

$P(X = x) = (1 - p)^{x - 1}p$

This means that $x - 1$ trials failed before the success occurred. Visually, this looks like a downward-sloping set of discrete points on the XY plane. The sample space is plotted on the X-axis and the probability of each point on the Y-axis. The height of a point in this plot tells us how probable it is, and these heights decrease in the form of a geometric progression.

A slightly more involved example could be a game with a number of outcomes, one of which signifies the end of the game while the others force the game to repeat. We are to measure the number of rounds the game takes before it completes.
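As a quick sanity check, here is a minimal Python sketch that simulates the coin-tossing experiment and compares the empirical frequencies against the PMF above. The value $p = 0.3$ and the number of experiments are arbitrary choices for illustration.

```python
import random

def geometric_trial(p):
    """Simulate one experiment: count tosses until the first head."""
    count = 1
    while random.random() >= p:  # a toss fails with probability 1 - p
        count += 1
    return count

p = 0.3          # assumed success probability, for illustration only
n = 100_000      # number of simulated experiments
samples = [geometric_trial(p) for _ in range(n)]

# Compare empirical frequencies with the PMF (1 - p)^(x - 1) * p.
for x in range(1, 6):
    empirical = samples.count(x) / n
    theoretical = (1 - p) ** (x - 1) * p
    print(f"x = {x}: empirical = {empirical:.4f}, PMF = {theoretical:.4f}")
```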

4. The Geometric Cumulative Distribution Function

The CDF gives us a convenient measure of the probability of input points up to a given baseline $x$. For geometric distributions, it takes the form:

$P(X \le x) = 1 - (1 - p)^x$

A proof of the above fact follows from taking the summation:

$P(X \le x) = \sum_{k = 1}^{x}(1 - p)^{k - 1}p$

                   $= p\sum_{k = 1}^{x}(1 - p)^{k - 1}$

                   $= p\,\frac{1 - (1 - p)^{x - 1 + 1}}{1 - (1 - p)}$     [sum of a geometric progression]

                   $= 1 - (1 - p)^x$         [cancel out $p$]
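To check the closed form numerically, the following short Python sketch (again with an arbitrary $p = 0.3$) compares the direct summation of PMF terms against $1 - (1 - p)^x$:

```python
p = 0.3  # assumed success probability, for illustration only

def geometric_cdf_sum(x, p):
    """Sum the PMF terms (1 - p)^(k - 1) * p for k = 1..x."""
    return sum((1 - p) ** (k - 1) * p for k in range(1, x + 1))

def geometric_cdf_closed(x, p):
    """Closed form 1 - (1 - p)^x derived above."""
    return 1 - (1 - p) ** x

for x in [1, 2, 5, 10]:
    print(x, geometric_cdf_sum(x, p), geometric_cdf_closed(x, p))
```

Both columns agree up to floating-point error.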

5. Expected Value of a Geometric Random Variable

The expectation $E[X]$ of a random variable is fundamentally a weighted average: each real value in the output set is weighted by the probability that it occurs. If more points in the sample space correspond to a particular output, that output carries greater weight. The expectation is also called the mean of the distribution. For a random variable taking integer values from $a$ to $b$, it is given by the general formula:

$E[X] = \sum_{k = a}^{b}k\ P(X = k)$

The expectation of a geometric random variable in particular looks like:

$E[X] = \sum_{k = 1}^{\infty}k\,(1 - p)^{k - 1}p$

This summation has a closed form which we can extract through the use of calculus. We describe this manipulation below. First, note that the sum of an infinite geometric progression (for $|x| \lt 1$) can be described as:

$\sum_{n = 0}^{\infty}x^n = \frac{1}{1 - x}$

Differentiate both sides with respect to $x$ to get:

$\sum_{n = 1}^{\infty}nx^{n - 1} = \frac{1}{(1 - x)^2}$

Using the above result with $x = 1 - p$, we proceed as follows:

$E[X] = p\sum_{k = 1}^{\infty}k\,(1 - p)^{k - 1}$

           $= p\,\frac{1}{(1 - (1 - p))^2}$

           $= p\,\frac{1}{p^2}$

           $= \frac{1}{p}$
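A quick numerical check of this result: the sketch below (with an arbitrary $p = 0.3$) truncates the series at a large index, which is safe because the terms decay geometrically, and compares the sum with $\frac{1}{p}$:

```python
p = 0.3  # assumed success probability, for illustration only

# Truncate the series sum k * (1 - p)^(k - 1) * p at a large k;
# the omitted tail is negligible because the terms decay geometrically.
series = sum(k * (1 - p) ** (k - 1) * p for k in range(1, 10_000))
print(series, 1 / p)  # both print approximately 3.3333
```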

6. Variance of a Geometric Random Variable

The variance of a random variable, $Var(X)$, measures the spread of the distribution. It is the expected squared distance of $X$ from the mean $\mu$. In general,

$Var(X) = E[(X - \mu)^2] = E[(X - E[X])^2]$

The variance can also be calculated using the formula:

$Var(X) = E[X^2] - (E[X])^2$

In the case of the geometric distribution, $E[X] = \frac{1}{p}$ and $(E[X])^2 = \frac{1}{p^2}$. It remains to calculate $E[X^2]$. We do this by algebraic manipulation and an application of the linearity of expectation, as follows:

$E[X^2] = E[X^2 - X + X] = E[X(X - 1) + X] = E[X(X - 1)] + E[X]$.

Further, let $1 - p = q$. Note that $X(X - 1)$ is itself a random variable, as it is a function of a random variable. Therefore, it makes sense to calculate its expectation.

$E[X(X - 1)] = p\sum_{x = 1}^{\infty}x(x - 1)\,q^{x - 1}$

This is equivalent to writing:

 $E[X(X - 1)] = p\frac{d}{dq}\left(\sum_{x = 1}^{\infty}(x - 1)\,q^x\right)$

                         $= p\frac{d}{dq}\left(q^2\sum_{x = 2}^{\infty}(x - 1)\,q^{x - 2}\right)$

                         $= p\frac{d}{dq}\left(q^2\,\frac{d}{dq}\left(\sum_{x = 2}^{\infty}q^{x - 1}\right)\right)$

                         $= p\frac{d}{dq}\left(q^2\,\frac{d}{dq}\left(\sum_{x = 1}^{\infty}q^{x}\right)\right)$

                         $= p\frac{d}{dq}\left(q^2\,\frac{d}{dq}\left(\frac{1}{1 - q} - 1\right)\right)$

                         $= p\frac{d}{dq}\left(q^2\,\frac{1}{(1 - q)^2}\right)$

                         $= p\,\frac{2q}{(1 - q)^3}$

                         $= p\,\frac{2(1 - p)}{(1 - (1 - p))^3}$

                         $= \frac{2 - 2p}{p^2}$

Plugging in this result, we get:

$Var(X) = E[X(X - 1)] + E[X] - (E[X])^2$

               $= \frac{2 - 2p}{p^2} + \frac{1}{p} - \frac{1}{p^2}$

               $= \frac{2 - 2p + p - 1}{p^2}$

               $= \frac{1 - p}{p^2}$
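As before, we can verify this numerically. The sketch below (with an arbitrary $p = 0.3$) computes $E[X]$ and $E[X^2]$ by truncated summation and checks the variance against $\frac{1 - p}{p^2}$:

```python
p = 0.3         # assumed success probability, for illustration only
terms = 10_000  # truncation point; the omitted tail is negligible

e_x  = sum(k * (1 - p) ** (k - 1) * p for k in range(1, terms))
e_x2 = sum(k * k * (1 - p) ** (k - 1) * p for k in range(1, terms))
variance = e_x2 - e_x ** 2
print(variance, (1 - p) / p ** 2)  # both print approximately 7.7778
```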

7. Memorylessness of the Geometric Distribution

An interesting property inherent in the geometric distribution is its memorylessness. If we were to begin observing the experiment after a given number of trials, the remaining trials would follow the same distribution as the trials that have elapsed. It is as if we had started over!

Suppose we start observing after $a$ trials have elapsed, and a further $b$ trials occur. We claim that:

$P(X \gt a + b | X \gt a) = P(X \gt b)$

To prove this, first note that:

$P(A | B) = \frac{P(A\ \cap\ B)}{P(B)}$

So we have:

$P(X \gt a + b | X \gt a) = \frac{P(\{X \gt a + b\} \cap \{X \gt a\})}{P(X \gt a)}$

But the event $\{X \gt a + b\} \cap \{X \gt a\}$ is just $\{X \gt a + b\}$, since $X \gt a + b$ implies $X \gt a$.

Therefore we have:

$P(X \gt a + b | X \gt a) = \frac{P(X \gt a + b)}{P(X \gt a)}$

Here we may use the CDF to quickly calculate probabilities of the form $P(X \gt x)$:

$P(X \gt x) = 1 - P(X \le x) = (1 - p)^x$

Thus we get:

$P(X \gt a + b | X \gt a) = \frac{(1 - p)^{a + b}}{(1 - p)^a} = (1 - p)^b = P(X \gt b)$
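Memorylessness can also be observed empirically. The following Python sketch (the values of $p$, $a$, and $b$ are arbitrary choices for illustration) estimates both sides of the identity from simulated samples:

```python
import random

def geometric_trial(p):
    """Simulate one experiment: count trials until the first success."""
    count = 1
    while random.random() >= p:
        count += 1
    return count

p, n = 0.3, 200_000   # assumed example values
a, b = 2, 3
samples = [geometric_trial(p) for _ in range(n)]

survived_a = [x for x in samples if x > a]
lhs = sum(1 for x in survived_a if x > a + b) / len(survived_a)  # P(X > a + b | X > a)
rhs = sum(1 for x in samples if x > b) / n                       # P(X > b)
print(lhs, rhs, (1 - p) ** b)  # all three should be close
```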

8. Conclusion

In this article, we discussed the various properties of the geometric distribution. We saw which situations can be modelled by this distribution. Further, we proved each property algebraically and underlined what each really means.

pritishc posted in Probability Jun 16, 2021 edited Jun 17, 2021 by pritishc
853 views
3

Day-by-day schedule (columns in the original table: Day, Date, Contents, Slides, Assignments):

Day 1 (Sep 14): Introduction to probability; simple problems: rolling a die n times and finding the probability of getting at least one six; the probability of forming a group of two men and two women out of four people. Slides: Lecture Notes - Best one.

Day 2 (Sep 17): Class test, 5 questions, time: 15 minutes.
Problems:
1. 3 men and 3 women are to be seated (i) in a row of chairs, (ii) around a table. What is the probability that all women are seated together, in both cases?
2. Divide 52 cards into 4 sets of 13. What is the probability that each set has an ace? (Answers by different methods.)
3. Roll a die 12 times; what is the probability that each digit appears exactly twice?
Slides: Question Paper - Test 1.

Day 3 (Sep 18): Probability space; pairwise disjoint events; x^2 + y^2 < 1: finding the probability of a region within this circular portion.
Problems:
1. In a class there are 9 students, and each of them buys a gift and distributes it randomly. What is the probability that (i) a particular student gets back his/her own gift, (ii) any of the students gets back his/her own gift?
2. The birthday problem.
3. The coupon collector problem.
Conditional probability. Problem: two fair coins are tossed (blindfolded). Given that at least one head is tossed, what is the probability that both tosses are heads?
First Bayes' formula. Problem: flip a fair coin. If a head comes out, one die is rolled; if a tail comes out, two dice are rolled. Compute the probability of getting exactly one six.
Independent events.

Day 4 (Sep 20): Bernoulli trials. Problem: A and B play a series of games; A lost the first game. What is the probability that A wins the series?
Random variable: X : Ω → R. Example: tossing two fair coins, where X is the random variable denoting the number of heads.
Probability mass function (PMF), with examples.
Expectation, variance, standard deviation, with examples.
Linearity of expectation.

Day 5 (Sep 21 - Sep 24): Solve questions from the GO PDF and the Lecture Notes.

Day 6 (Sep 27): Continuous probability distributions; probability density function.
Uniform random variable. Problem: assume that X is uniform on [0, 1]. What is P(X ∈ Q)? What is the probability that the binary expansion of X starts with 0.010?
Exponential random variable; memoryless property: P(X ≥ x + y | X ≥ y) = e^{−λx}.
Problem: assume that a lightbulb lasts 100 hours on average. Assuming an exponential distribution, compute the probability that it lasts more than 200 hours and the probability that it lasts less than 50 hours. (A worked sketch of this problem appears after the schedule.)
Normal random variable; the standard normal distribution. Problem: assume that X is normal with mean μ = 2 and variance σ^2 = 25. Compute the probability that X is between 1 and 4.
Expectation, variance, PDF, and cumulative distribution function of continuous random variables, with examples.
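The lightbulb problem from Day 6 has a short closed-form answer. Here is a minimal worked sketch under the stated exponential assumption, taking the rate λ = 1/100 per hour from the mean lifetime of 100 hours:

```python
import math

mean = 100.0     # average lifetime in hours, from the problem statement
lam = 1 / mean   # rate parameter λ of the exponential distribution

p_more_200 = math.exp(-lam * 200)      # P(X > 200) = e^(-λ·200)
p_less_50  = 1 - math.exp(-lam * 50)   # P(X < 50)  = 1 - e^(-λ·50)
print(p_more_200)  # ≈ 0.1353
print(p_less_50)   # ≈ 0.3935
```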

For more information visit GO Classroom: https://classroom.gateoverflow.in/course/view.php?id=12

Manoja Rajalakshmi A posted in Probability Sep 27, 2018
1,243 views
4
I am not able to solve previous years' GATE questions. I have learnt the basics but am getting stuck on almost every question. Please suggest some resources for this, considering the time constraint.
Bad_Doctor posted in Probability Dec 11, 2017
3,284 views