# Probability Theory

Random Variables

A random variable is a numerical description of the outcome of a statistical experiment.

Types of Random Variables

Random variables whose set of possible values can be written either as a finite sequence x1, x2, …, xn, or as an infinite sequence x1, x2, …, are said to be discrete. For example, a random variable whose set of possible values is the set of nonnegative integers is a discrete random variable.

There are also random variables that take on a continuum of possible values. These are called continuous random variables.

# Probability Distribution

The probability distribution for a random variable describes how the probabilities are distributed over the values of the random variable.

Discrete Distribution

For a discrete random variable, x, the probability distribution is defined by a probability mass function, denoted by f(x). This function provides the probability for each value of the random variable. In the development of the probability mass function for a discrete random variable, two conditions must be satisfied: (1) f(x) must be nonnegative for each value of the random variable, and (2) the probabilities, summed over all values of the random variable, must equal one.
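The two conditions can be checked numerically. This is a minimal sketch using a hypothetical discrete distribution, a fair six-sided die:

```python
# A hypothetical PMF: a fair six-sided die, f(x) = 1/6 for x = 1..6.
pmf = {x: 1/6 for x in range(1, 7)}

# Condition (1): f(x) must be nonnegative for each value.
assert all(p >= 0 for p in pmf.values())

# Condition (2): the probabilities must sum to one.
assert abs(sum(pmf.values()) - 1.0) < 1e-12
```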

Continuous Distribution

A continuous random variable may assume any value in an interval on the real number line or in a collection of intervals. Since there is an infinite number of values in any interval, it is not meaningful to talk about the probability that the random variable will take on a specific value; instead, the probability that a continuous random variable will lie within a given interval is considered.

In the continuous case, the counterpart of the probability mass function is the probability density function, also denoted by f(x). For a continuous random variable, the probability density function provides the height or value of the function at any particular value of x; it does not directly give the probability of the random variable taking on a specific value. However, the area under the graph of f(x) corresponding to some interval, obtained by computing the integral of f(x) over that interval, provides the probability that the variable will take on a value within that interval.
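The interval probability described above can be sketched with plain-Python numerical integration. The density e^(-x) (an exponential with λ = 1) and the interval [0.5, 1.5] are arbitrary choices for illustration:

```python
import math

def trapezoid(f, a, b, n=10_000):
    """Numerically integrate f over [a, b] with the trapezoidal rule."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# Hypothetical density: exponential with lambda = 1, f(x) = e^(-x) for x >= 0.
density = lambda x: math.exp(-x)

# P(0.5 < X < 1.5) = area under f over the interval = e^(-0.5) - e^(-1.5).
prob = trapezoid(density, 0.5, 1.5)
exact = math.exp(-0.5) - math.exp(-1.5)
assert abs(prob - exact) < 1e-6
```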

Joint Distribution

In general, if X and Y are two random variables, the probability distribution that defines their simultaneous behavior is called a joint probability distribution.

If X and Y are discrete, this distribution can be described with a joint probability mass function.

If X and Y are continuous, this distribution can be described with a joint probability density function.

# Expectation

One of the most important concepts in probability theory is that of the expectation of a random variable. If X is a discrete random variable taking on the possible values x1, x2, x3, …, then the expectation or expected value of X, denoted by E[X], is defined by

$$E[X] = \sum_{i} x_i \, P\{X = x_i\}$$

We can also define the expectation of a continuous random variable. Suppose that X is a continuous random variable with probability density function f. Since, for small dx, f(x) dx ≈ P{x < X < x + dx}, it follows that a weighted average of all possible values of X, with the weight given to x equal to the probability that X is near x, is just the integral over all x of x·f(x). Hence, it is natural to define the expected value of X by

$$E[X] = \int_{-\infty}^{\infty} x \, f(x) \, dx$$
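The discrete expectation, a probability-weighted sum of values, can be sketched directly; the fair six-sided die is a hypothetical example:

```python
# Sketch: E[X] as the probability-weighted sum of values, using a fair
# six-sided die as a hypothetical discrete random variable.
values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6
expectation = sum(x * p for x, p in zip(values, probs))
assert abs(expectation - 3.5) < 1e-12  # the exact mean is 21/6 = 3.5
```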

# Variance

Given a random variable X along with its probability distribution, it would be extremely useful to be able to summarize the essential properties of the mass function by certain suitably defined measures. One such measure is E[X], the expected value of X. However, while E[X] yields the weighted average of the possible values of X, it does not tell us anything about the variation, or spread, of these values.

So how do we measure this variation or spread? One reasonable approach is to look at how far the values of X tend to be from the mean μ = E[X]. One way to do this would be to consider E[|X − μ|]. However, this quantity is mathematically inconvenient to work with, so a more tractable measure is the expected squared difference between X and its mean.

Thus the variance can be defined by

$$\operatorname{Var}(X) = E\big[(X - \mu)^2\big]$$
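Continuing the hypothetical fair-die example, the variance can be computed from its definition as the expected squared deviation from the mean:

```python
# Sketch: Var(X) = E[(X - mu)^2] for a hypothetical fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6
mu = sum(x * p for x, p in zip(values, probs))               # E[X] = 3.5
var = sum((x - mu) ** 2 * p for x, p in zip(values, probs))  # E[(X-mu)^2]
assert abs(var - 35/12) < 1e-12  # exact variance of a fair die is 35/12
```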

# Special Random Variables

The Binomial Random Variable

The binomial probability mass function provides the probability that x successes will occur in n trials of a binomial experiment, where each trial succeeds with probability p:

$$f(x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n$$
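A minimal sketch of the binomial PMF, using only the standard library; the parameters n = 5 and p = 0.5 are an illustrative choice:

```python
from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) for a binomial random variable with n trials,
    each succeeding with probability p."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# Example: 2 successes in 5 fair trials has probability C(5,2)/2^5 = 10/32.
assert abs(binomial_pmf(2, 5, 0.5) - 10/32) < 1e-12
# Sanity check: the PMF sums to one over x = 0..n.
assert abs(sum(binomial_pmf(x, 5, 0.3) for x in range(6)) - 1.0) < 1e-12
```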

The Poisson Random Variable

A random variable X, taking on one of the values 0, 1, 2, …, is said to be a Poisson random variable with parameter λ, λ > 0, if its probability mass function is given by

$$P\{X = i\} = e^{-\lambda} \frac{\lambda^i}{i!}, \quad i = 0, 1, 2, \ldots$$

*(Figure: graph of the Poisson probability mass function.)*

The Poisson random variable has a wide range of applications in a variety of areas because it may be used as an approximation for a binomial random variable with parameters (n, p) when n is large and p is small.
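This approximation can be checked numerically by comparing a binomial(n, p) PMF against the Poisson PMF with λ = n·p. The values n = 1000 and p = 0.003 are illustrative assumptions:

```python
from math import comb, exp, factorial

# Sketch: binomial(n, p) PMF vs. Poisson PMF with lambda = n*p,
# for large n and small p (here n = 1000, p = 0.003, lambda = 3).
n, p = 1000, 0.003
lam = n * p

for i in range(8):
    binom = comb(n, i) * p**i * (1 - p)**(n - i)
    poisson = exp(-lam) * lam**i / factorial(i)
    assert abs(binom - poisson) < 2e-3  # the two PMFs agree closely
```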

Some examples of random variables that follow, to a good approximation, the Poisson probability law are:

1. The number of misprints on a page
2. The number of people in a community living to 100 years of age
3. The number of wrong telephone numbers that are dialed in a day.

Hypergeometric Random Variable

The hypergeometric distribution gives the probability of y successes when sampling n items without replacement from a population of N items containing s successes and N − s failures:

$$P\{Y = y\} = \frac{\binom{s}{y}\binom{N-s}{n-y}}{\binom{N}{n}}$$
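A short sketch of this PMF; the deck-of-cards parameters (N = 52, s = 4 aces, n = 5 draws) are a hypothetical example:

```python
from math import comb

def hypergeom_pmf(y, n, s, N):
    """P(Y = y): y successes when drawing n items without replacement
    from a population of N items, s of which are successes."""
    return comb(s, y) * comb(N - s, n - y) / comb(N, n)

# Hypothetical example: drawing 5 cards from a 52-card deck with 4 aces.
p1 = hypergeom_pmf(1, 5, 4, 52)          # P(exactly one ace)
total = sum(hypergeom_pmf(y, 5, 4, 52) for y in range(5))
assert abs(total - 1.0) < 1e-12          # probabilities sum to one
```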

Uniform Random Variable

A random variable X is said to be uniformly distributed over the interval (α, β) if its probability density function is given by

$$f(x) = \begin{cases} \dfrac{1}{\beta - \alpha} & \alpha < x < \beta \\ 0 & \text{otherwise} \end{cases}$$

*(Figure: graph of the uniform probability density function.)*

Normal Random Variable

A random variable X is said to be normally distributed with parameters μ and σ² if its density is given by

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2 / 2\sigma^2}, \quad -\infty < x < \infty$$

*(Figure: graph of the normal probability density function.)*

A normal random variable with mean 0 and variance 1 is called a standard normal random variable and is denoted by Z.
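Any normal X can be converted to a standard normal via Z = (X − μ)/σ. A sketch using the standard library's `statistics.NormalDist`; the parameters μ = 100, σ = 15 are an illustrative assumption:

```python
from statistics import NormalDist

# Sketch: standardizing a normal variable.  If X is normal with mean mu
# and standard deviation sigma, then Z = (X - mu)/sigma is standard normal.
X = NormalDist(mu=100, sigma=15)   # hypothetical example parameters
Z = NormalDist(mu=0, sigma=1)      # the standard normal

# P(X <= 115) equals P(Z <= 1), since (115 - 100)/15 = 1.
assert abs(X.cdf(115) - Z.cdf(1.0)) < 1e-12
```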

Exponential Random Variable

A continuous random variable whose probability density function is given, for some λ > 0, by

$$f(x) = \begin{cases} \lambda e^{-\lambda x} & x \geq 0 \\ 0 & x < 0 \end{cases}$$

is said to be an exponential random variable with parameter λ.

*(Figure: graph of the exponential probability density function.)*

The exponential distribution often arises as the distribution of the amount of time until some specific event occurs: for example, the amount of time (starting from now) until an earthquake occurs, or until a new war breaks out.
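One way to see the exponential as a waiting-time distribution is to simulate it. The inverse-transform formula X = −ln(1 − U)/λ and the parameter λ = 2 are illustrative assumptions; the theoretical mean waiting time is 1/λ:

```python
import math
import random

# Sketch: simulating exponential waiting times by inverse transform,
# X = -ln(1 - U)/lambda with U uniform on [0, 1).
random.seed(0)
lam = 2.0
samples = [-math.log(1.0 - random.random()) / lam for _ in range(200_000)]
mean = sum(samples) / len(samples)
assert abs(mean - 1 / lam) < 0.01  # sample mean close to 1/lambda = 0.5
```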

Hi, I am Sanchita: an engineer, a math enthusiast, an AlmaBetter Data Science trainee, and a writer at Analytics Vidhya.