Stats

SymPy statistics module

Introduces a random variable type into the SymPy language.

Random variables may be declared using prebuilt functions such as Normal, Exponential, Coin, Die, etc., or built with functions like FiniteRV.

Queries on random expressions can be made using the functions:

Expression              Meaning
P(condition)            Probability
E(expression)           Expected value
H(expression)           Entropy
variance(expression)    Variance
density(expression)     Probability density function
sample(expression)      Produce a realization
where(condition)        Where the condition is true

Examples

>>> from sympy.stats import P, E, variance, Die, Normal
>>> from sympy import Eq, simplify
>>> X, Y = Die('X', 6), Die('Y', 6) # Define two six-sided dice
>>> Z = Normal('Z', 0, 1) # Declare a Normal random variable with mean 0, std 1
>>> P(X>3) # Probability X is greater than 3
1/2
>>> E(X+Y) # Expectation of the sum of two dice
7
>>> variance(X+Y) # Variance of the sum of two dice
35/6
>>> simplify(P(Z>1)) # Probability of Z being greater than 1
1/2 - erf(sqrt(2)/2)/2
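
The remaining queries from the table above work on the same objects. For illustration, a few more queries; the outputs shown are the values one would expect from the definitions above:

>>> from sympy.stats import density, sample, H
>>> density(Z)(1)   # standard normal pdf evaluated at 1
sqrt(2)*exp(-1/2)/(2*sqrt(pi))
>>> H(X)            # entropy of a fair six-sided die
log(6)
>>> sample(X)       # a random realization; the output varies between runs
3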

One can also create custom distributions and define custom random variables as follows:

  1. If you want to create a Continuous Random Variable:

>>> from sympy.stats import ContinuousRV, P, E
>>> from sympy import exp, Symbol, Interval, oo
>>> x = Symbol('x')
>>> pdf = exp(-x) # pdf of the Continuous Distribution
>>> Z = ContinuousRV(x, pdf, set=Interval(0, oo))
>>> E(Z)
1
>>> P(Z > 5)
exp(-5)

1.1 To create an instance of Continuous Distribution:

>>> from sympy.stats import ContinuousDistributionHandmade
>>> from sympy import Lambda
>>> dist = ContinuousDistributionHandmade(Lambda(x, pdf), set=Interval(0, oo))
>>> dist.pdf(x)
exp(-x)

  2. If you want to create a Discrete Random Variable:

>>> from sympy.stats import DiscreteRV, P, E
>>> from sympy import Symbol, S
>>> p = S(1)/2
>>> x = Symbol('x', integer=True, positive=True)
>>> pdf = p*(1 - p)**(x - 1)
>>> D = DiscreteRV(x, pdf, set=S.Naturals)
>>> E(D)
2
>>> P(D > 3)
1/8

2.1 To create an instance of Discrete Distribution:

>>> from sympy.stats import DiscreteDistributionHandmade
>>> from sympy import Lambda
>>> dist = DiscreteDistributionHandmade(Lambda(x, pdf), set=S.Naturals)
>>> dist.pdf(x)
2**(1 - x)/2

  3. If you want to create a Finite Random Variable:

>>> from sympy.stats import FiniteRV, P, E
>>> from sympy import Rational
>>> pmf = {1: Rational(1, 3), 2: Rational(1, 6), 3: Rational(1, 4), 4: Rational(1, 4)}
>>> X = FiniteRV('X', pmf)
>>> E(X)
29/12
>>> P(X > 3)
1/4

3.1 To create an instance of Finite Distribution:

>>> from sympy.stats import FiniteDistributionHandmade
>>> dist = FiniteDistributionHandmade(pmf)
>>> dist.pmf(x)
Lambda(x, Piecewise((1/3, Eq(x, 1)), (1/6, Eq(x, 2)), (1/4, Eq(x, 3) | Eq(x, 4)), (0, True)))

Random Variable Types

Finite Types

sympy.stats.DiscreteUniform(name, items)[source]

Create a Finite Random Variable representing a uniform distribution over the input set.

Parameters

items: list/tuple

Items over which Uniform distribution is to be made

Returns

RandomSymbol

Examples

>>> from sympy.stats import DiscreteUniform, density
>>> from sympy import symbols
>>> X = DiscreteUniform('X', symbols('a b c')) # equally likely over a, b, c
>>> density(X).dict
{a: 1/3, b: 1/3, c: 1/3}
>>> Y = DiscreteUniform('Y', list(range(5))) # distribution over a range
>>> density(Y).dict
{0: 1/5, 1: 1/5, 2: 1/5, 3: 1/5, 4: 1/5}
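
Expectation and probability queries follow directly; a short sketch, with the values one would expect for Y above:

>>> from sympy.stats import E, P
>>> E(Y)        # mean of the uniform distribution over {0, 1, 2, 3, 4}
2
>>> P(Y > 2)    # two of the five outcomes exceed 2
2/5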

References

R708

https://en.wikipedia.org/wiki/Discrete_uniform_distribution

R709

http://mathworld.wolfram.com/DiscreteUniformDistribution.html

sympy.stats.Die(name, sides=6)[source]

Create a Finite Random Variable representing a fair die.

Parameters

sides: Integer

Represents the number of sides of the Die; the default is 6

Returns

RandomSymbol

Examples

>>> from sympy.stats import Die, density
>>> from sympy import Symbol
>>> D6 = Die('D6', 6) # Six sided Die
>>> density(D6).dict
{1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}
>>> D4 = Die('D4', 4) # Four sided Die
>>> density(D4).dict
{1: 1/4, 2: 1/4, 3: 1/4, 4: 1/4}
>>> n = Symbol('n', positive=True, integer=True)
>>> Dn = Die('Dn', n) # n sided Die
>>> density(Dn).dict
Density(DieDistribution(n))
>>> density(Dn).dict.subs(n, 4).doit()
{1: 1/4, 2: 1/4, 3: 1/4, 4: 1/4}
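
Dice combine with the usual queries like any other random symbol; a short sketch, with the values one would expect for D6:

>>> from sympy.stats import P, E
>>> P(D6 > 4)   # probability of rolling a 5 or a 6
1/3
>>> E(D6)
7/2
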
sympy.stats.Bernoulli(name, p, succ=1, fail=0)[source]

Create a Finite Random Variable representing a Bernoulli process.

Parameters

p : Rational number between 0 and 1

Represents probability of success

succ : Integer/symbol/string

Represents event of success

fail : Integer/symbol/string

Represents event of failure

Returns

RandomSymbol

Examples

>>> from sympy.stats import Bernoulli, density
>>> from sympy import S
>>> X = Bernoulli('X', S(3)/4) # 1-0 Bernoulli variable, probability = 3/4
>>> density(X).dict
{0: 1/4, 1: 3/4}
>>> X = Bernoulli('X', S.Half, 'Heads', 'Tails') # A fair coin toss
>>> density(X).dict
{Heads: 1/2, Tails: 1/2}
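
For the 1-0 variant, the familiar moment formulas apply; an illustrative sketch, with the values one would expect:

>>> from sympy.stats import E, variance
>>> E(Bernoulli('B', S(3)/4))          # the mean equals p
3/4
>>> variance(Bernoulli('B', S(3)/4))   # the variance equals p*(1 - p)
3/16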

References

R710

https://en.wikipedia.org/wiki/Bernoulli_distribution

R711

http://mathworld.wolfram.com/BernoulliDistribution.html

sympy.stats.Coin(name, p=1 / 2)[source]

Create a Finite Random Variable representing a Coin toss.

Parameters

p : Rational Number between 0 and 1

Represents the probability of getting “Heads”; the default is 1/2

Returns

RandomSymbol

Examples

>>> from sympy.stats import Coin, density
>>> from sympy import Rational
>>> C = Coin('C') # A fair coin toss
>>> density(C).dict
{H: 1/2, T: 1/2}
>>> C2 = Coin('C2', Rational(3, 5)) # An unfair coin
>>> density(C2).dict
{H: 3/5, T: 2/5}

References

R712

https://en.wikipedia.org/wiki/Coin_flipping

sympy.stats.Binomial(name, n, p, succ=1, fail=0)[source]

Create a Finite Random Variable representing a binomial distribution.

Parameters

n : Positive Integer

Represents number of trials

p : Rational Number between 0 and 1

Represents probability of success

succ : Integer/symbol/string

Represents the event of success; the default is 1

fail : Integer/symbol/string

Represents the event of failure; the default is 0

Returns

RandomSymbol

Examples

>>> from sympy.stats import Binomial, density
>>> from sympy import S, Symbol
>>> X = Binomial('X', 4, S.Half) # Four "coin flips"
>>> density(X).dict
{0: 1/16, 1: 1/4, 2: 3/8, 3: 1/4, 4: 1/16}
>>> n = Symbol('n', positive=True, integer=True)
>>> p = Symbol('p', positive=True)
>>> X = Binomial('X', n, S.Half) # n "coin flips"
>>> density(X).dict
Density(BinomialDistribution(n, 1/2, 1, 0))
>>> density(X).dict.subs(n, 4).doit()
{0: 1/16, 1: 1/4, 2: 3/8, 3: 1/4, 4: 1/16}
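
The moments follow the familiar closed forms; a short sketch, with the values one would expect for four fair coin flips:

>>> from sympy.stats import E, variance
>>> E(Binomial('X', 4, S.Half))         # n*p
2
>>> variance(Binomial('X', 4, S.Half))  # n*p*(1 - p)
1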

References

R713

https://en.wikipedia.org/wiki/Binomial_distribution

R714

http://mathworld.wolfram.com/BinomialDistribution.html

sympy.stats.BetaBinomial(name, n, alpha, beta)[source]

Create a Finite Random Variable representing a Beta-binomial distribution.

Parameters

n : Positive Integer

Represents number of trials

alpha : Real positive number

beta : Real positive number

Returns

RandomSymbol

Examples

>>> from sympy.stats import BetaBinomial, density
>>> X = BetaBinomial('X', 2, 1, 1)
>>> density(X).dict
{0: 1/3, 1: 2*beta(2, 2), 2: 1/3}

References

R715

https://en.wikipedia.org/wiki/Beta-binomial_distribution

R716

http://mathworld.wolfram.com/BetaBinomialDistribution.html

sympy.stats.Hypergeometric(name, N, m, n)[source]

Create a Finite Random Variable representing a hypergeometric distribution.

Parameters

N : Positive Integer

Represents finite population of size N.

m : Positive Integer

Represents number of trials with required feature.

n : Positive Integer

Represents numbers of draws.

Returns

RandomSymbol

Examples

>>> from sympy.stats import Hypergeometric, density
>>> X = Hypergeometric('X', 10, 5, 3) # 10 marbles, 5 white (success), 3 draws
>>> density(X).dict
{0: 1/12, 1: 5/12, 2: 5/12, 3: 1/12}
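
The mean follows the usual formula \(n m / N\); an illustrative query, with the value one would expect for the variable above:

>>> from sympy.stats import E
>>> E(X)    # 3*5/10
3/2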

References

R717

https://en.wikipedia.org/wiki/Hypergeometric_distribution

R718

http://mathworld.wolfram.com/HypergeometricDistribution.html

sympy.stats.FiniteRV(name, density, **kwargs)[source]

Create a Finite Random Variable given a dict representing the density.

Parameters

name : Symbol

Represents name of the random variable.

density: A dict

Dictionary containing the probability mass function (pmf) of the finite distribution

check : bool

If True, it will check whether the given density sums to 1 over the given set. If False, it will not perform this check. Default is False.

Returns

RandomSymbol

Examples

>>> from sympy.stats import FiniteRV, P, E
>>> density = {0: .1, 1: .2, 2: .3, 3: .4}
>>> X = FiniteRV('X', density)
>>> E(X)
2.00000000000000
>>> P(X >= 2)
0.700000000000000
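
Other queries work on the same object; for this density the variance is 1. A short sketch, with the value one would expect:

>>> from sympy.stats import variance
>>> variance(X)
1.00000000000000
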
sympy.stats.Rademacher(name)[source]

Create a Finite Random Variable representing a Rademacher distribution.

Returns

RandomSymbol

Examples

>>> from sympy.stats import Rademacher, density
>>> X = Rademacher('X')
>>> density(X).dict
{-1: 1/2, 1: 1/2}
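
The distribution has zero mean and unit variance; a quick check, with the values one would expect:

>>> from sympy.stats import E, variance
>>> E(X), variance(X)
(0, 1)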

References

R719

https://en.wikipedia.org/wiki/Rademacher_distribution

Discrete Types

sympy.stats.Geometric(name, p)[source]

Create a discrete random variable with a Geometric distribution.

The density of the Geometric distribution is given by

\[f(k) := p (1 - p)^{k - 1}\]
Parameters

p: A probability between 0 and 1

Returns

RandomSymbol

Examples

>>> from sympy.stats import Geometric, density, E, variance
>>> from sympy import Symbol, S
>>> p = S.One / 5
>>> z = Symbol("z")
>>> X = Geometric("x", p)
>>> density(X)(z)
(4/5)**(z - 1)/5
>>> E(X)
5
>>> variance(X)
20
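
Tail probabilities can be queried directly; for the variable above one would expect \(P(X > 2) = (1 - p)^2\):

>>> from sympy.stats import P
>>> P(X > 2)
16/25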

References

R720

https://en.wikipedia.org/wiki/Geometric_distribution

R721

http://mathworld.wolfram.com/GeometricDistribution.html

sympy.stats.Hermite(name, a1, a2)[source]

Create a discrete random variable with a Hermite distribution.

The density of the Hermite distribution is given by

\[f(x):= e^{-a_1 -a_2}\sum_{j=0}^{\left \lfloor x/2 \right \rfloor} \frac{a_{1}^{x-2j}a_{2}^{j}}{(x-2j)!j!}\]
Parameters

a1 : A real number greater than or equal to 0

a2 : A real number greater than or equal to 0

Returns

RandomSymbol

Examples

>>> from sympy.stats import Hermite, density, E, variance
>>> from sympy import Symbol
>>> a1 = Symbol("a1", positive=True)
>>> a2 = Symbol("a2", positive=True)
>>> x = Symbol("x")
>>> H = Hermite("H", a1=5, a2=4)
>>> density(H)(2)
33*exp(-9)/2
>>> E(H)
13
>>> variance(H)
21

References

R722

https://en.wikipedia.org/wiki/Hermite_distribution

sympy.stats.Poisson(name, lamda)[source]

Create a discrete random variable with a Poisson distribution.

The density of the Poisson distribution is given by

\[f(k) := \frac{\lambda^{k} e^{- \lambda}}{k!}\]
Parameters

lamda: Positive number, a rate

Returns

RandomSymbol

Examples

>>> from sympy.stats import Poisson, density, E, variance
>>> from sympy import Symbol, simplify
>>> rate = Symbol("lambda", positive=True)
>>> z = Symbol("z")
>>> X = Poisson("x", rate)
>>> density(X)(z)
lambda**z*exp(-lambda)/factorial(z)
>>> E(X)
lambda
>>> simplify(variance(X))
lambda

References

R723

https://en.wikipedia.org/wiki/Poisson_distribution

R724

http://mathworld.wolfram.com/PoissonDistribution.html

sympy.stats.Logarithmic(name, p)[source]

Create a discrete random variable with a Logarithmic distribution.

The density of the Logarithmic distribution is given by

\[f(k) := \frac{-p^k}{k \ln{(1 - p)}}\]
Parameters

p: A value between 0 and 1

Returns

RandomSymbol

Examples

>>> from sympy.stats import Logarithmic, density, E, variance
>>> from sympy import Symbol, S
>>> p = S.One / 5
>>> z = Symbol("z")
>>> X = Logarithmic("x", p)
>>> density(X)(z)
-5**(-z)/(z*log(4/5))
>>> E(X)
-1/(-4*log(5) + 8*log(2))
>>> variance(X)
-1/((-4*log(5) + 8*log(2))*(-2*log(5) + 4*log(2))) + 1/(-64*log(2)*log(5) + 64*log(2)**2 + 16*log(5)**2) - 10/(-32*log(5) + 64*log(2))

References

R725

https://en.wikipedia.org/wiki/Logarithmic_distribution

R726

http://mathworld.wolfram.com/LogarithmicDistribution.html

sympy.stats.NegativeBinomial(name, r, p)[source]

Create a discrete random variable with a Negative Binomial distribution.

The density of the Negative Binomial distribution is given by

\[f(k) := \binom{k + r - 1}{k} (1 - p)^r p^k\]
Parameters

r: A positive value

p: A value between 0 and 1

Returns

RandomSymbol

Examples

>>> from sympy.stats import NegativeBinomial, density, E, variance
>>> from sympy import Symbol, S
>>> r = 5
>>> p = S.One / 5
>>> z = Symbol("z")
>>> X = NegativeBinomial("x", r, p)
>>> density(X)(z)
1024*5**(-z)*binomial(z + 4, z)/3125
>>> E(X)
5/4
>>> variance(X)
25/16

References

R727

https://en.wikipedia.org/wiki/Negative_binomial_distribution

R728

http://mathworld.wolfram.com/NegativeBinomialDistribution.html

sympy.stats.Skellam(name, mu1, mu2)[source]

Create a discrete random variable with a Skellam distribution.

The Skellam is the distribution of the difference N1 - N2 of two statistically independent random variables N1 and N2 each Poisson-distributed with respective expected values mu1 and mu2.

The density of the Skellam distribution is given by

\[f(k) := e^{-(\mu_1+\mu_2)}(\frac{\mu_1}{\mu_2})^{k/2}I_k(2\sqrt{\mu_1\mu_2})\]
Parameters

mu1: A non-negative value

mu2: A non-negative value

Returns

RandomSymbol

Examples

>>> from sympy.stats import Skellam, density, E, variance
>>> from sympy import Symbol, pprint
>>> z = Symbol("z", integer=True)
>>> mu1 = Symbol("mu1", positive=True)
>>> mu2 = Symbol("mu2", positive=True)
>>> X = Skellam("x", mu1, mu2)
>>> pprint(density(X)(z), use_unicode=False)
     z
     -
     2
/mu1\   -mu1 - mu2        /       _____   _____\
|---| *e          *besseli\z, 2*\/ mu1 *\/ mu2 /
\mu2/
>>> E(X)
mu1 - mu2
>>> variance(X).expand()
mu1 + mu2

References

R729

https://en.wikipedia.org/wiki/Skellam_distribution

sympy.stats.YuleSimon(name, rho)[source]

Create a discrete random variable with a Yule-Simon distribution.

The density of the Yule-Simon distribution is given by

\[f(k) := \rho B(k, \rho + 1)\]
Parameters

rho: A positive value

Returns

RandomSymbol

Examples

>>> from sympy.stats import YuleSimon, density, E, variance
>>> from sympy import Symbol, simplify
>>> p = 5
>>> z = Symbol("z")
>>> X = YuleSimon("x", p)
>>> density(X)(z)
5*beta(z, 6)
>>> simplify(E(X))
5/4
>>> simplify(variance(X))
25/48

References

R730

https://en.wikipedia.org/wiki/Yule%E2%80%93Simon_distribution

sympy.stats.Zeta(name, s)[source]

Create a discrete random variable with a Zeta distribution.

The density of the Zeta distribution is given by

\[f(k) := \frac{1}{k^s \zeta{(s)}}\]
Parameters

s: A value greater than 1

Returns

RandomSymbol

Examples

>>> from sympy.stats import Zeta, density, E, variance
>>> from sympy import Symbol
>>> s = 5
>>> z = Symbol("z")
>>> X = Zeta("x", s)
>>> density(X)(z)
1/(z**5*zeta(5))
>>> E(X)
pi**4/(90*zeta(5))
>>> variance(X)
-pi**8/(8100*zeta(5)**2) + zeta(3)/zeta(5)

References

R731

https://en.wikipedia.org/wiki/Zeta_distribution

Continuous Types

sympy.stats.Arcsin(name, a=0, b=1)[source]

Create a Continuous Random Variable with an arcsin distribution.

The density of the arcsin distribution is given by

\[f(x) := \frac{1}{\pi\sqrt{(x-a)(b-x)}}\]

with \(x \in (a,b)\). It must hold that \(-\infty < a < b < \infty\).

Parameters

a : Real number, the left interval boundary

b : Real number, the right interval boundary

Returns

RandomSymbol

Examples

>>> from sympy.stats import Arcsin, density, cdf
>>> from sympy import Symbol
>>> a = Symbol("a", real=True)
>>> b = Symbol("b", real=True)
>>> z = Symbol("z")
>>> X = Arcsin("x", a, b)
>>> density(X)(z)
1/(pi*sqrt((-a + z)*(b - z)))
>>> cdf(X)(z)
Piecewise((0, a > z),
        (2*asin(sqrt((-a + z)/(-a + b)))/pi, b >= z),
        (1, True))

References

R732

https://en.wikipedia.org/wiki/Arcsine_distribution

sympy.stats.Benini(name, alpha, beta, sigma)[source]

Create a Continuous Random Variable with a Benini distribution.

The density of the Benini distribution is given by

\[f(x) := e^{-\alpha\log{\frac{x}{\sigma}} -\beta\log^2\left[{\frac{x}{\sigma}}\right]} \left(\frac{\alpha}{x}+\frac{2\beta\log{\frac{x}{\sigma}}}{x}\right)\]

This is a heavy-tailed distribution and is also known as the log-Rayleigh distribution.

Parameters

alpha : Real number, \(\alpha > 0\), a shape

beta : Real number, \(\beta > 0\), a shape

sigma : Real number, \(\sigma > 0\), a scale

Returns

RandomSymbol

Examples

>>> from sympy.stats import Benini, density, cdf
>>> from sympy import Symbol, pprint
>>> alpha = Symbol("alpha", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> sigma = Symbol("sigma", positive=True)
>>> z = Symbol("z")
>>> X = Benini("x", alpha, beta, sigma)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
/                  /  z  \\             /  z  \            2/  z  \
|        2*beta*log|-----||  - alpha*log|-----| - beta*log  |-----|
|alpha             \sigma/|             \sigma/             \sigma/
|----- + -----------------|*e
\  z             z        /
>>> cdf(X)(z)
Piecewise((1 - exp(-alpha*log(z/sigma) - beta*log(z/sigma)**2), sigma <= z),
        (0, True))

References

R733

https://en.wikipedia.org/wiki/Benini_distribution

R734

http://reference.wolfram.com/legacy/v8/ref/BeniniDistribution.html

sympy.stats.Beta(name, alpha, beta)[source]

Create a Continuous Random Variable with a Beta distribution.

The density of the Beta distribution is given by

\[f(x) := \frac{x^{\alpha-1}(1-x)^{\beta-1}} {\mathrm{B}(\alpha,\beta)}\]

with \(x \in [0,1]\).

Parameters

alpha : Real number, \(\alpha > 0\), a shape

beta : Real number, \(\beta > 0\), a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import Beta, density, E, variance
>>> from sympy import Symbol, simplify, pprint, factor
>>> alpha = Symbol("alpha", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> z = Symbol("z")
>>> X = Beta("x", alpha, beta)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
 alpha - 1        beta - 1
z         *(1 - z)
--------------------------
      B(alpha, beta)
>>> simplify(E(X))
alpha/(alpha + beta)
>>> factor(simplify(variance(X)))
alpha*beta/((alpha + beta)**2*(alpha + beta + 1))

References

R735

https://en.wikipedia.org/wiki/Beta_distribution

R736

http://mathworld.wolfram.com/BetaDistribution.html

sympy.stats.BetaNoncentral(name, alpha, beta, lamda)[source]

Create a Continuous Random Variable with a Type I Noncentral Beta distribution.

The density of the Noncentral Beta distribution is given by

\[f(x) := \sum_{k=0}^\infty e^{-\lambda/2}\frac{(\lambda/2)^k}{k!} \frac{x^{\alpha+k-1}(1-x)^{\beta-1}}{\mathrm{B}(\alpha+k,\beta)}\]

with \(x \in [0,1]\).

Parameters

alpha : Real number, \(\alpha > 0\), a shape

beta : Real number, \(\beta > 0\), a shape

lamda: Real number, `lambda >= 0`, noncentrality parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import BetaNoncentral, density, cdf
>>> from sympy import Symbol, pprint
>>> alpha = Symbol("alpha", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> lamda = Symbol("lamda", nonnegative=True)
>>> z = Symbol("z")
>>> X = BetaNoncentral("x", alpha, beta, lamda)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
  oo
_____
\    `
 \                                              -lamda
  \                          k                  -------
   \    k + alpha - 1 /lamda\         beta - 1     2
    )  z             *|-----| *(1 - z)        *e
   /                  \  2  /
  /    ------------------------------------------------
 /                  B(k + alpha, beta)*k!
/____,
k = 0

Compute the cdf with specific ‘x’, ‘alpha’, ‘beta’ and ‘lamda’ values as follows:

>>> cdf(BetaNoncentral("x", 1, 1, 1), evaluate=False)(2).doit()
2*exp(1/2)

The argument evaluate=False prevents an attempt at evaluation of the sum for general x, before the argument 2 is passed.

References

R737

https://en.wikipedia.org/wiki/Noncentral_beta_distribution

R738

https://reference.wolfram.com/language/ref/NoncentralBetaDistribution.html

sympy.stats.BetaPrime(name, alpha, beta)[source]

Create a continuous random variable with a Beta prime distribution.

The density of the Beta prime distribution is given by

\[f(x) := \frac{x^{\alpha-1} (1+x)^{-\alpha -\beta}}{B(\alpha,\beta)}\]

with \(x > 0\).

Parameters

alpha : Real number, \(\alpha > 0\), a shape

beta : Real number, \(\beta > 0\), a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import BetaPrime, density
>>> from sympy import Symbol, pprint
>>> alpha = Symbol("alpha", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> z = Symbol("z")
>>> X = BetaPrime("x", alpha, beta)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
 alpha - 1        -alpha - beta
z         *(z + 1)
-------------------------------
         B(alpha, beta)

References

R739

https://en.wikipedia.org/wiki/Beta_prime_distribution

R740

http://mathworld.wolfram.com/BetaPrimeDistribution.html

sympy.stats.BoundedPareto(name, alpha, left, right)[source]

Create a continuous random variable with a Bounded Pareto distribution.

The density of the Bounded Pareto distribution is given by

\[f(x) := \frac{\alpha L^{\alpha}x^{-\alpha-1}}{1-(\frac{L}{H})^{\alpha}}\]
Parameters

alpha : Real Number, \(\alpha > 0\)

Shape parameter

left : Real Number, \(left > 0\)

Location parameter

right : Real Number, \(right > left\)

Location parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import BoundedPareto, density, cdf, E
>>> from sympy import symbols
>>> L, H = symbols('L, H', positive=True)
>>> X = BoundedPareto('X', 2, L, H)
>>> x = symbols('x')
>>> density(X)(x)
2*L**2/(x**3*(1 - L**2/H**2))
>>> cdf(X)(x)
Piecewise((-H**2*L**2/(x**2*(H**2 - L**2)) + H**2/(H**2 - L**2), L <= x), (0, True))
>>> E(X).simplify()
2*H*L/(H + L)

References

R741

https://en.wikipedia.org/wiki/Pareto_distribution#Bounded_Pareto_distribution

sympy.stats.Cauchy(name, x0, gamma)[source]

Create a continuous random variable with a Cauchy distribution.

The density of the Cauchy distribution is given by

\[f(x) := \frac{1}{\pi \gamma [1 + {(\frac{x-x_0}{\gamma})}^2]}\]
Parameters

x0 : Real number, the location

gamma : Real number, \(\gamma > 0\), a scale

Returns

RandomSymbol

Examples

>>> from sympy.stats import Cauchy, density
>>> from sympy import Symbol
>>> x0 = Symbol("x0")
>>> gamma = Symbol("gamma", positive=True)
>>> z = Symbol("z")
>>> X = Cauchy("x", x0, gamma)
>>> density(X)(z)
1/(pi*gamma*(1 + (-x0 + z)**2/gamma**2))

References

R742

https://en.wikipedia.org/wiki/Cauchy_distribution

R743

http://mathworld.wolfram.com/CauchyDistribution.html

sympy.stats.Chi(name, k)[source]

Create a continuous random variable with a Chi distribution.

The density of the Chi distribution is given by

\[f(x) := \frac{2^{1-k/2}x^{k-1}e^{-x^2/2}}{\Gamma(k/2)}\]

with \(x \geq 0\).

Parameters

k : Positive integer, The number of degrees of freedom

Returns

RandomSymbol

Examples

>>> from sympy.stats import Chi, density, E
>>> from sympy import Symbol, simplify
>>> k = Symbol("k", integer=True)
>>> z = Symbol("z")
>>> X = Chi("x", k)
>>> density(X)(z)
2**(1 - k/2)*z**(k - 1)*exp(-z**2/2)/gamma(k/2)
>>> simplify(E(X))
sqrt(2)*gamma(k/2 + 1/2)/gamma(k/2)

References

R744

https://en.wikipedia.org/wiki/Chi_distribution

R745

http://mathworld.wolfram.com/ChiDistribution.html

sympy.stats.ChiNoncentral(name, k, l)[source]

Create a continuous random variable with a non-central Chi distribution.

The density of the non-central Chi distribution is given by

\[f(x) := \frac{e^{-(x^2+\lambda^2)/2} x^k\lambda} {(\lambda x)^{k/2}} I_{k/2-1}(\lambda x)\]

with \(x \geq 0\). Here, \(I_\nu (x)\) is the modified Bessel function of the first kind.

Parameters

k : A positive Integer, \(k > 0\), the number of degrees of freedom

lambda : Real number, \(\lambda > 0\), Shift parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import ChiNoncentral, density
>>> from sympy import Symbol
>>> k = Symbol("k", integer=True)
>>> l = Symbol("l")
>>> z = Symbol("z")
>>> X = ChiNoncentral("x", k, l)
>>> density(X)(z)
l*z**k*(l*z)**(-k/2)*exp(-l**2/2 - z**2/2)*besseli(k/2 - 1, l*z)

References

R746

https://en.wikipedia.org/wiki/Noncentral_chi_distribution

sympy.stats.ChiSquared(name, k)[source]

Create a continuous random variable with a Chi-squared distribution.

The density of the Chi-squared distribution is given by

\[f(x) := \frac{1}{2^{\frac{k}{2}}\Gamma\left(\frac{k}{2}\right)} x^{\frac{k}{2}-1} e^{-\frac{x}{2}}\]

with \(x \geq 0\).

Parameters

k : Positive integer, The number of degrees of freedom

Returns

RandomSymbol

Examples

>>> from sympy.stats import ChiSquared, density, E, variance, moment
>>> from sympy import Symbol
>>> k = Symbol("k", integer=True, positive=True)
>>> z = Symbol("z")
>>> X = ChiSquared("x", k)
>>> density(X)(z)
2**(-k/2)*z**(k/2 - 1)*exp(-z/2)/gamma(k/2)
>>> E(X)
k
>>> variance(X)
2*k
>>> moment(X, 3)
k**3 + 6*k**2 + 8*k

References

R747

https://en.wikipedia.org/wiki/Chi_squared_distribution

R748

http://mathworld.wolfram.com/Chi-SquaredDistribution.html

sympy.stats.Dagum(name, p, a, b)[source]

Create a continuous random variable with a Dagum distribution.

The density of the Dagum distribution is given by

\[f(x) := \frac{a p}{x} \left( \frac{\left(\tfrac{x}{b}\right)^{a p}} {\left(\left(\tfrac{x}{b}\right)^a + 1 \right)^{p+1}} \right)\]

with \(x > 0\).

Parameters

p : Real number, \(p > 0\), a shape

a : Real number, \(a > 0\), a shape

b : Real number, \(b > 0\), a scale

Returns

RandomSymbol

Examples

>>> from sympy.stats import Dagum, density, cdf
>>> from sympy import Symbol
>>> p = Symbol("p", positive=True)
>>> a = Symbol("a", positive=True)
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Dagum("x", p, a, b)
>>> density(X)(z)
a*p*(z/b)**(a*p)*((z/b)**a + 1)**(-p - 1)/z
>>> cdf(X)(z)
Piecewise(((1 + (z/b)**(-a))**(-p), z >= 0), (0, True))

References

R749

https://en.wikipedia.org/wiki/Dagum_distribution

sympy.stats.Erlang(name, k, l)[source]

Create a continuous random variable with an Erlang distribution.

The density of the Erlang distribution is given by

\[f(x) := \frac{\lambda^k x^{k-1} e^{-\lambda x}}{(k-1)!}\]

with \(x \in [0,\infty)\).

Parameters

k : Positive integer

l : Real number, \(\lambda > 0\), the rate

Returns

RandomSymbol

Examples

>>> from sympy.stats import Erlang, density, cdf, E, variance
>>> from sympy import Symbol, simplify, pprint
>>> k = Symbol("k", integer=True, positive=True)
>>> l = Symbol("l", positive=True)
>>> z = Symbol("z")
>>> X = Erlang("x", k, l)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
 k  k - 1  -l*z
l *z     *e
---------------
    Gamma(k)
>>> C = cdf(X)(z)
>>> pprint(C, use_unicode=False)
/lowergamma(k, l*z)
|------------------  for z > 0
<     Gamma(k)
|
\        0           otherwise
>>> E(X)
k/l
>>> simplify(variance(X))
k/l**2

References

R750

https://en.wikipedia.org/wiki/Erlang_distribution

R751

http://mathworld.wolfram.com/ErlangDistribution.html

sympy.stats.ExGaussian(name, mean, std, rate)[source]

Create a continuous random variable with an Exponentially modified Gaussian (EMG) distribution.

The density of the exponentially modified Gaussian distribution is given by

\[f(x) := \frac{\lambda}{2}e^{\frac{\lambda}{2}(2\mu+\lambda\sigma^2-2x)} \text{erfc}(\frac{\mu + \lambda\sigma^2 - x}{\sqrt{2}\sigma})\]

with \(x \in \mathbb{R}\). Note that the expected value is \(\mu + 1/\lambda\).

Parameters

mu : A Real number, the mean of the Gaussian component

std : A positive Real number, \(\sigma^2 > 0\), the variance of the Gaussian component

lambda : A positive Real number, \(\lambda > 0\), the rate of the Exponential component

Returns

RandomSymbol

Examples

>>> from sympy.stats import ExGaussian, density, cdf, E
>>> from sympy.stats import variance, skewness
>>> from sympy import Symbol, pprint, simplify
>>> mean = Symbol("mu")
>>> std = Symbol("sigma", positive=True)
>>> rate = Symbol("lamda", positive=True)
>>> z = Symbol("z")
>>> X = ExGaussian("x", mean, std, rate)
>>> pprint(density(X)(z), use_unicode=False)
             /           2             \
       lamda*\lamda*sigma  + 2*mu - 2*z/
       ---------------------------------     /  ___ /           2         \\
                       2                     |\/ 2 *\lamda*sigma  + mu - z/|
lamda*e                                 *erfc|-----------------------------|
                                             \           2*sigma           /
----------------------------------------------------------------------------
                                     2
>>> cdf(X)(z)
-(erf(sqrt(2)*(-lamda**2*sigma**2 + lamda*(-mu + z))/(2*lamda*sigma))/2 + 1/2)*exp(lamda**2*sigma**2/2 - lamda*(-mu + z)) + erf(sqrt(2)*(-mu + z)/(2*sigma))/2 + 1/2
>>> E(X)
(lamda*mu + 1)/lamda
>>> simplify(variance(X))
sigma**2 + lamda**(-2)
>>> simplify(skewness(X))
2/(lamda**2*sigma**2 + 1)**(3/2)

References

R752

https://en.wikipedia.org/wiki/Exponentially_modified_Gaussian_distribution

sympy.stats.Exponential(name, rate)[source]

Create a continuous random variable with an Exponential distribution.

The density of the exponential distribution is given by

\[f(x) := \lambda \exp(-\lambda x)\]

with \(x > 0\). Note that the expected value is \(1/\lambda\).

Parameters

rate : A positive Real number, \(\lambda > 0\), the rate (or inverse scale/inverse mean)

Returns

RandomSymbol

Examples

>>> from sympy.stats import Exponential, density, cdf, E
>>> from sympy.stats import variance, std, skewness, quantile
>>> from sympy import Symbol
>>> l = Symbol("lambda", positive=True)
>>> z = Symbol("z")
>>> p = Symbol("p")
>>> X = Exponential("x", l)
>>> density(X)(z)
lambda*exp(-lambda*z)
>>> cdf(X)(z)
Piecewise((1 - exp(-lambda*z), z >= 0), (0, True))
>>> quantile(X)(p)
-log(1 - p)/lambda
>>> E(X)
1/lambda
>>> variance(X)
lambda**(-2)
>>> skewness(X)
2
>>> X = Exponential('x', 10)
>>> density(X)(z)
10*exp(-10*z)
>>> E(X)
1/10
>>> std(X)
1/10
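
The memorylessness of the exponential distribution can be checked directly; a short sketch, with the values one would expect for rate 10:

>>> from sympy.stats import P
>>> P(X > 1)
exp(-10)
>>> P(X > 3)/P(X > 2)   # equals P(X > 1) by memorylessness
exp(-10)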

References

R753

https://en.wikipedia.org/wiki/Exponential_distribution

R754

http://mathworld.wolfram.com/ExponentialDistribution.html

sympy.stats.FDistribution(name, d1, d2)[source]

Create a continuous random variable with a F distribution.

The density of the F distribution is given by

\[f(x) := \frac{\sqrt{\frac{(d_1 x)^{d_1} d_2^{d_2}} {(d_1 x + d_2)^{d_1 + d_2}}}} {x \mathrm{B} \left(\frac{d_1}{2}, \frac{d_2}{2}\right)}\]

with \(x > 0\).

Parameters

d1 : \(d_1 > 0\), where d_1 is the degrees of freedom (n_1 - 1)

d2 : \(d_2 > 0\), where d_2 is the degrees of freedom (n_2 - 1)

Returns

RandomSymbol

Examples

>>> from sympy.stats import FDistribution, density
>>> from sympy import Symbol, pprint
>>> d1 = Symbol("d1", positive=True)
>>> d2 = Symbol("d2", positive=True)
>>> z = Symbol("z")
>>> X = FDistribution("x", d1, d2)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
  d2
  --    ______________________________
  2    /       d1            -d1 - d2
d2  *\/  (d1*z)  *(d1*z + d2)
--------------------------------------
                /d1  d2\
             z*B|--, --|
                \2   2 /

References

R755

https://en.wikipedia.org/wiki/F-distribution

R756

http://mathworld.wolfram.com/F-Distribution.html

sympy.stats.FisherZ(name, d1, d2)[source]

Create a Continuous Random Variable with a Fisher’s Z distribution.

The density of the Fisher’s Z distribution is given by

\[f(x) := \frac{2d_1^{d_1/2} d_2^{d_2/2}} {\mathrm{B}(d_1/2, d_2/2)} \frac{e^{d_1z}}{\left(d_1e^{2z}+d_2\right)^{\left(d_1+d_2\right)/2}}\]
Parameters

d1 : \(d_1 > 0\), degree of freedom

d2 : \(d_2 > 0\), degree of freedom

Returns

RandomSymbol

Examples

>>> from sympy.stats import FisherZ, density
>>> from sympy import Symbol, pprint
>>> d1 = Symbol("d1", positive=True)
>>> d2 = Symbol("d2", positive=True)
>>> z = Symbol("z")
>>> X = FisherZ("x", d1, d2)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
                            d1   d2
    d1   d2               - -- - --
    --   --                 2    2
    2    2  /    2*z     \           d1*z
2*d1  *d2  *\d1*e    + d2/         *e
-----------------------------------------
                 /d1  d2\
                B|--, --|
                 \2   2 /

References

R757

https://en.wikipedia.org/wiki/Fisher%27s_z-distribution

R758

http://mathworld.wolfram.com/Fishersz-Distribution.html

sympy.stats.Frechet(name, a, s=1, m=0)[source]

Create a continuous random variable with a Frechet distribution.

The density of the Frechet distribution is given by

\[f(x) := \frac{\alpha}{s} \left(\frac{x-m}{s}\right)^{-1-\alpha} e^{-(\frac{x-m}{s})^{-\alpha}}\]

with \(x \geq m\).

Parameters

a : Real number, \(a \in \left(0, \infty\right)\) the shape

s : Real number, \(s \in \left(0, \infty\right)\) the scale

m : Real number, \(m \in \left(-\infty, \infty\right)\) the minimum

Returns

RandomSymbol

Examples

>>> from sympy.stats import Frechet, density, cdf
>>> from sympy import Symbol
>>> a = Symbol("a", positive=True)
>>> s = Symbol("s", positive=True)
>>> m = Symbol("m", real=True)
>>> z = Symbol("z")
>>> X = Frechet("x", a, s, m)
>>> density(X)(z)
a*((-m + z)/s)**(-a - 1)*exp(-((-m + z)/s)**(-a))/s
>>> cdf(X)(z)
Piecewise((exp(-((-m + z)/s)**(-a)), m <= z), (0, True))

References

R759

https://en.wikipedia.org/wiki/Fr%C3%A9chet_distribution

sympy.stats.Gamma(name, k, theta)[source]

Create a continuous random variable with a Gamma distribution.

The density of the Gamma distribution is given by

\[f(x) := \frac{1}{\Gamma(k) \theta^k} x^{k - 1} e^{-\frac{x}{\theta}}\]

with \(x \in [0,\infty)\).

Parameters

k : Real number, \(k > 0\), a shape

theta : Real number, \(\theta > 0\), a scale

Returns

RandomSymbol

Examples

>>> from sympy.stats import Gamma, density, cdf, E, variance
>>> from sympy import Symbol, pprint, simplify
>>> k = Symbol("k", positive=True)
>>> theta = Symbol("theta", positive=True)
>>> z = Symbol("z")
>>> X = Gamma("x", k, theta)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
                  -z
                -----
     -k  k - 1  theta
theta  *z     *e
---------------------
       Gamma(k)
>>> C = cdf(X, meijerg=True)(z)
>>> pprint(C, use_unicode=False)
/            /     z  \
|k*lowergamma|k, -----|
|            \   theta/
<----------------------  for z >= 0
|     Gamma(k + 1)
|
\          0             otherwise
>>> E(X)
k*theta
>>> V = simplify(variance(X))
>>> pprint(V, use_unicode=False)
       2
k*theta

References

R760

https://en.wikipedia.org/wiki/Gamma_distribution

R761

http://mathworld.wolfram.com/GammaDistribution.html

sympy.stats.GammaInverse(name, a, b)[source]

Create a continuous random variable with an inverse Gamma distribution.

The density of the inverse Gamma distribution is given by

\[f(x) := \frac{\beta^\alpha}{\Gamma(\alpha)} x^{-\alpha - 1} \exp\left(\frac{-\beta}{x}\right)\]

with \(x > 0\).

Parameters

a : Real number, \(a > 0\) a shape

b : Real number, \(b > 0\) a scale

Returns

RandomSymbol

Examples

>>> from sympy.stats import GammaInverse, density, cdf
>>> from sympy import Symbol, pprint
>>> a = Symbol("a", positive=True)
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = GammaInverse("x", a, b)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
            -b
            ---
 a  -a - 1   z
b *z      *e
---------------
   Gamma(a)
>>> cdf(X)(z)
Piecewise((uppergamma(a, b/z)/gamma(a), z > 0), (0, True))

References

R762

https://en.wikipedia.org/wiki/Inverse-gamma_distribution

sympy.stats.Gompertz(name, b, eta)[source]

Create a Continuous Random Variable with Gompertz distribution.

The density of the Gompertz distribution is given by

\[f(x) := b \eta e^{b x} e^{\eta} \exp \left(-\eta e^{bx} \right)\]

with \(x \in [0, \infty)\).

Parameters

b : Real number, \(b > 0\), a scale

eta : Real number, \(\eta > 0\), a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import Gompertz, density
>>> from sympy import Symbol
>>> b = Symbol("b", positive=True)
>>> eta = Symbol("eta", positive=True)
>>> z = Symbol("z")
>>> X = Gompertz("x", b, eta)
>>> density(X)(z)
b*eta*exp(eta)*exp(b*z)*exp(-eta*exp(b*z))

References

R763

https://en.wikipedia.org/wiki/Gompertz_distribution

sympy.stats.Gumbel(name, beta, mu, minimum=False)[source]

Create a Continuous Random Variable with Gumbel distribution.

The density of the Gumbel distribution is given by

For Maximum

\[f(x) := \dfrac{1}{\beta} \exp \left( -\dfrac{x-\mu}{\beta} - \exp \left( -\dfrac{x - \mu}{\beta} \right) \right)\]

with \(x \in (-\infty, \infty)\).

For Minimum

\[f(x) := \frac{e^{- e^{\frac{- \mu + x}{\beta}} + \frac{- \mu + x}{\beta}}}{\beta}\]

with \(x \in (-\infty, \infty)\).

Parameters

mu : Real number, \(\mu\), the location

beta : Real number, \(\beta > 0\), the scale

minimum : Boolean, False by default; set to True to use the minimum (rather than maximum) form of the distribution

Returns

RandomSymbol

Examples

>>> from sympy.stats import Gumbel, density, cdf
>>> from sympy import Symbol
>>> x = Symbol("x")
>>> mu = Symbol("mu")
>>> beta = Symbol("beta", positive=True)
>>> X = Gumbel("x", beta, mu)
>>> density(X)(x)
exp(-exp(-(-mu + x)/beta) - (-mu + x)/beta)/beta
>>> cdf(X)(x)
exp(-exp(-(-mu + x)/beta))

References

R764

http://mathworld.wolfram.com/GumbelDistribution.html

R765

https://en.wikipedia.org/wiki/Gumbel_distribution

R766

http://www.mathwave.com/help/easyfit/html/analyses/distributions/gumbel_max.html

R767

http://www.mathwave.com/help/easyfit/html/analyses/distributions/gumbel_min.html

sympy.stats.Kumaraswamy(name, a, b)[source]

Create a Continuous Random Variable with a Kumaraswamy distribution.

The density of the Kumaraswamy distribution is given by

\[f(x) := a b x^{a-1} (1-x^a)^{b-1}\]

with \(x \in [0,1]\).

Parameters

a : Real number, \(a > 0\) a shape

b : Real number, \(b > 0\) a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import Kumaraswamy, density, cdf
>>> from sympy import Symbol, pprint
>>> a = Symbol("a", positive=True)
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Kumaraswamy("x", a, b)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
                   b - 1
     a - 1 /     a\
a*b*z     *\1 - z /
>>> cdf(X)(z)
Piecewise((0, z < 0), (1 - (1 - z**a)**b, z <= 1), (1, True))

References

R768

https://en.wikipedia.org/wiki/Kumaraswamy_distribution

sympy.stats.Laplace(name, mu, b)[source]

Create a continuous random variable with a Laplace distribution.

The density of the Laplace distribution is given by

\[f(x) := \frac{1}{2 b} \exp \left(-\frac{|x-\mu|}b \right)\]
Parameters

mu : Real number or a list/matrix, the location (mean) or the location vector

b : Real number or a positive definite matrix, representing a scale or the covariance matrix

Returns

RandomSymbol

Examples

>>> from sympy.stats import Laplace, density, cdf
>>> from sympy import Symbol, pprint
>>> mu = Symbol("mu")
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Laplace("x", mu, b)
>>> density(X)(z)
exp(-Abs(mu - z)/b)/(2*b)
>>> cdf(X)(z)
Piecewise((exp((-mu + z)/b)/2, mu > z), (1 - exp((mu - z)/b)/2, True))
>>> L = Laplace('L', [1, 2], [[1, 0], [0, 1]])
>>> pprint(density(L)(1, 2), use_unicode=False)
 5        /     ____\
e *besselk\0, \/ 35 /
---------------------
          pi

References

R769

https://en.wikipedia.org/wiki/Laplace_distribution

R770

http://mathworld.wolfram.com/LaplaceDistribution.html

sympy.stats.Levy(name, mu, c)[source]

Create a continuous random variable with a Levy distribution.

The density of the Levy distribution is given by

\[f(x) := \sqrt{\frac{c}{2 \pi}} \frac{\exp\left(-\frac{c}{2 (x - \mu)}\right)}{(x - \mu)^{3/2}}\]
Parameters

mu : Real number, the location parameter

c : Real number, \(c > 0\), a scale parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import Levy, density, cdf
>>> from sympy import Symbol
>>> mu = Symbol("mu", real=True)
>>> c = Symbol("c", positive=True)
>>> z = Symbol("z")
>>> X = Levy("x", mu, c)
>>> density(X)(z)
sqrt(2)*sqrt(c)*exp(-c/(-2*mu + 2*z))/(2*sqrt(pi)*(-mu + z)**(3/2))
>>> cdf(X)(z)
erfc(sqrt(c)*sqrt(1/(-2*mu + 2*z)))

References

R771

https://en.wikipedia.org/wiki/L%C3%A9vy_distribution

R772

http://mathworld.wolfram.com/LevyDistribution.html

sympy.stats.Logistic(name, mu, s)[source]

Create a continuous random variable with a logistic distribution.

The density of the logistic distribution is given by

\[f(x) := \frac{e^{-(x-\mu)/s}} {s\left(1+e^{-(x-\mu)/s}\right)^2}\]
Parameters

mu : Real number, the location (mean)

s : Real number, \(s > 0\) a scale

Returns

RandomSymbol

Examples

>>> from sympy.stats import Logistic, density, cdf
>>> from sympy import Symbol
>>> mu = Symbol("mu", real=True)
>>> s = Symbol("s", positive=True)
>>> z = Symbol("z")
>>> X = Logistic("x", mu, s)
>>> density(X)(z)
exp((mu - z)/s)/(s*(exp((mu - z)/s) + 1)**2)
>>> cdf(X)(z)
1/(exp((mu - z)/s) + 1)

References

R773

https://en.wikipedia.org/wiki/Logistic_distribution

R774

http://mathworld.wolfram.com/LogisticDistribution.html

sympy.stats.LogLogistic(name, alpha, beta)[source]

Create a continuous random variable with a log-logistic distribution. The distribution is unimodal when \(\beta > 1\).

The density of the log-logistic distribution is given by

\[f(x) := \frac{(\frac{\beta}{\alpha})(\frac{x}{\alpha})^{\beta - 1}} {(1 + (\frac{x}{\alpha})^{\beta})^2}\]
Parameters

alpha : Real number, \(\alpha > 0\), scale parameter and median of distribution

beta : Real number, \(\beta > 0\) a shape parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import LogLogistic, density, cdf, quantile
>>> from sympy import Symbol, pprint
>>> alpha = Symbol("alpha", real=True, positive=True)
>>> beta = Symbol("beta", real=True, positive=True)
>>> p = Symbol("p")
>>> z = Symbol("z", positive=True)
>>> X = LogLogistic("x", alpha, beta)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
              beta - 1
       /  z  \
  beta*|-----|
       \alpha/
------------------------
                       2
      /       beta    \
      |/  z  \        |
alpha*||-----|     + 1|
      \\alpha/        /
>>> cdf(X)(z)
1/(1 + (z/alpha)**(-beta))
>>> quantile(X)(p)
alpha*(p/(1 - p))**(1/beta)

References

R775

https://en.wikipedia.org/wiki/Log-logistic_distribution

sympy.stats.LogNormal(name, mean, std)[source]

Create a continuous random variable with a log-normal distribution.

The density of the log-normal distribution is given by

\[f(x) := \frac{1}{x\sqrt{2\pi\sigma^2}} e^{-\frac{\left(\ln x-\mu\right)^2}{2\sigma^2}}\]

with \(x \geq 0\).

Parameters

mu : Real number, the log-scale

sigma : Real number, \(\sigma^2 > 0\) a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import LogNormal, density
>>> from sympy import Symbol, pprint
>>> mu = Symbol("mu", real=True)
>>> sigma = Symbol("sigma", positive=True)
>>> z = Symbol("z")
>>> X = LogNormal("x", mu, sigma)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
                      2
       -(-mu + log(z))
       -----------------
                  2
  ___      2*sigma
\/ 2 *e
------------------------
        ____
    2*\/ pi *sigma*z
>>> X = LogNormal('x', 0, 1) # Mean 0, standard deviation 1
>>> density(X)(z)
sqrt(2)*exp(-log(z)**2/2)/(2*sqrt(pi)*z)

References

R776

https://en.wikipedia.org/wiki/Lognormal

R777

http://mathworld.wolfram.com/LogNormalDistribution.html

sympy.stats.Lomax(name, alpha, lamda)[source]

Create a continuous random variable with a Lomax distribution.

The density of the Lomax distribution is given by

\[f(x) := \frac{\alpha}{\lambda}\left[1+\frac{x}{\lambda}\right]^{-(\alpha+1)}\]
Parameters

alpha : Real Number, \(\alpha > 0\)

Shape parameter

lamda : Real Number, \(\lambda > 0\)

Scale parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import Lomax, density, cdf, E
>>> from sympy import symbols
>>> a, l = symbols('a, l', positive=True)
>>> X = Lomax('X', a, l)
>>> x = symbols('x')
>>> density(X)(x)
a*(1 + x/l)**(-a - 1)/l
>>> cdf(X)(x)
Piecewise((1 - (1 + x/l)**(-a), x >= 0), (0, True))
>>> a = 2
>>> X = Lomax('X', a, l)
>>> E(X)
l

References

R778

https://en.wikipedia.org/wiki/Lomax_distribution

sympy.stats.Maxwell(name, a)[source]

Create a continuous random variable with a Maxwell distribution.

The density of the Maxwell distribution is given by

\[f(x) := \sqrt{\frac{2}{\pi}} \frac{x^2 e^{-x^2/(2a^2)}}{a^3}\]

with \(x \geq 0\).

Parameters

a : Real number, \(a > 0\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import Maxwell, density, E, variance
>>> from sympy import Symbol, simplify
>>> a = Symbol("a", positive=True)
>>> z = Symbol("z")
>>> X = Maxwell("x", a)
>>> density(X)(z)
sqrt(2)*z**2*exp(-z**2/(2*a**2))/(sqrt(pi)*a**3)
>>> E(X)
2*sqrt(2)*a/sqrt(pi)
>>> simplify(variance(X))
a**2*(-8 + 3*pi)/pi

References

R779

https://en.wikipedia.org/wiki/Maxwell_distribution

R780

http://mathworld.wolfram.com/MaxwellDistribution.html

sympy.stats.Moyal(name, mu, sigma)[source]

Create a continuous random variable with a Moyal distribution. The density of the Moyal distribution is given by

\[f(x) := \frac{\exp\left(-\frac{1}{2}\exp\left(-\frac{x-\mu}{\sigma}\right)-\frac{x-\mu}{2\sigma}\right)}{\sqrt{2\pi}\sigma}\]

with \(x \in \mathbb{R}\).

Parameters

mu : Real number

Location parameter

sigma : Real positive number

Scale parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import Moyal, density, cdf
>>> from sympy import Symbol, simplify
>>> mu = Symbol("mu", real=True)
>>> sigma = Symbol("sigma", positive=True, real=True)
>>> z = Symbol("z")
>>> X = Moyal("x", mu, sigma)
>>> density(X)(z)
sqrt(2)*exp(-exp((mu - z)/sigma)/2 - (-mu + z)/(2*sigma))/(2*sqrt(pi)*sigma)
>>> simplify(cdf(X)(z))
1 - erf(sqrt(2)*exp((mu - z)/(2*sigma))/2)

References

R781

https://reference.wolfram.com/language/ref/MoyalDistribution.html

R782

http://www.stat.rice.edu/~dobelman/textfiles/DistributionsHandbook.pdf

sympy.stats.Nakagami(name, mu, omega)[source]

Create a continuous random variable with a Nakagami distribution.

The density of the Nakagami distribution is given by

\[f(x) := \frac{2\mu^\mu}{\Gamma(\mu)\omega^\mu} x^{2\mu-1} \exp\left(-\frac{\mu}{\omega}x^2 \right)\]

with \(x > 0\).

Parameters

mu : Real number, \(\mu \geq \frac{1}{2}\) a shape

omega : Real number, \(\omega > 0\), the spread

Returns

RandomSymbol

Examples

>>> from sympy.stats import Nakagami, density, E, variance, cdf
>>> from sympy import Symbol, simplify, pprint
>>> mu = Symbol("mu", positive=True)
>>> omega = Symbol("omega", positive=True)
>>> z = Symbol("z")
>>> X = Nakagami("x", mu, omega)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
                                2
                           -mu*z
                           -------
    mu      -mu  2*mu - 1  omega
2*mu  *omega   *z        *e
----------------------------------
            Gamma(mu)
>>> simplify(E(X))
sqrt(mu)*sqrt(omega)*gamma(mu + 1/2)/gamma(mu + 1)
>>> V = simplify(variance(X))
>>> pprint(V, use_unicode=False)
                    2
         omega*Gamma (mu + 1/2)
omega - -----------------------
        Gamma(mu)*Gamma(mu + 1)
>>> cdf(X)(z)
Piecewise((lowergamma(mu, mu*z**2/omega)/gamma(mu), z > 0),
        (0, True))

References

R783

https://en.wikipedia.org/wiki/Nakagami_distribution

sympy.stats.Normal(name, mean, std)[source]

Create a continuous random variable with a Normal distribution.

The density of the Normal distribution is given by

\[f(x) := \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }\]
Parameters

mu : Real number or a list representing the mean or the mean vector

sigma : Real number or a positive definite square matrix, \(\sigma^2 > 0\), the variance

Returns

RandomSymbol

Examples

>>> from sympy.stats import Normal, density, E, std, cdf, skewness, quantile, marginal_distribution
>>> from sympy import Symbol, simplify, pprint
>>> mu = Symbol("mu")
>>> sigma = Symbol("sigma", positive=True)
>>> z = Symbol("z")
>>> y = Symbol("y")
>>> p = Symbol("p")
>>> X = Normal("x", mu, sigma)
>>> density(X)(z)
sqrt(2)*exp(-(-mu + z)**2/(2*sigma**2))/(2*sqrt(pi)*sigma)
>>> C = simplify(cdf(X))(z) # it needs a little more help...
>>> pprint(C, use_unicode=False)
   /  ___          \
   |\/ 2 *(-mu + z)|
erf|---------------|
   \    2*sigma    /   1
-------------------- + -
         2             2
>>> quantile(X)(p)
mu + sqrt(2)*sigma*erfinv(2*p - 1)
>>> simplify(skewness(X))
0
>>> X = Normal("x", 0, 1) # Mean 0, standard deviation 1
>>> density(X)(z)
sqrt(2)*exp(-z**2/2)/(2*sqrt(pi))
>>> E(2*X + 1)
1
>>> simplify(std(2*X + 1))
2
>>> m = Normal('X', [1, 2], [[2, 1], [1, 2]])
>>> pprint(density(m)(y, z), use_unicode=False)
       /1   y\ /2*y   z\   /    z\ /  y   2*z    \
       |- - -|*|--- - -| + |1 - -|*|- - + --- - 1|
  ___  \2   2/ \ 3    3/   \    2/ \  3    3     /
\/ 3 *e
--------------------------------------------------
                       6*pi
>>> marginal_distribution(m, m[0])(1)
 1/(2*sqrt(pi))
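
By symmetry, a standard normal variable exceeds its mean with probability one half; a quick check, with the values one would expect:

>>> from sympy.stats import P
>>> X = Normal("x", 0, 1)
>>> P(X > 0)
1/2
>>> cdf(X)(0)
1/2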

References

R784

https://en.wikipedia.org/wiki/Normal_distribution

R785

http://mathworld.wolfram.com/NormalDistributionFunction.html

sympy.stats.Pareto(name, xm, alpha)[source]

Create a continuous random variable with the Pareto distribution.

The density of the Pareto distribution is given by

\[f(x) := \frac{\alpha\,x_m^\alpha}{x^{\alpha+1}}\]

with \(x \in [x_m,\infty)\).

Parameters

xm : Real number, \(x_m > 0\), a scale

alpha : Real number, \(\alpha > 0\), a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import Pareto, density
>>> from sympy import Symbol
>>> xm = Symbol("xm", positive=True)
>>> beta = Symbol("beta", positive=True)
>>> z = Symbol("z")
>>> X = Pareto("x", xm, beta)
>>> density(X)(z)
beta*xm**beta*z**(-beta - 1)

References

R786

https://en.wikipedia.org/wiki/Pareto_distribution

R787

http://mathworld.wolfram.com/ParetoDistribution.html

sympy.stats.PowerFunction(name, alpha, a, b)[source]

Creates a continuous random variable with a Power Function distribution.

The density of the Power Function distribution is given by

\[f(x) := \frac{{\alpha}(x - a)^{\alpha - 1}}{(b - a)^{\alpha}}\]

with \(x \in [a,b]\).

Parameters

alpha : Positive number, \(0 < \alpha\), the shape parameter

a : Real number, \(-\infty < a\) the left boundary

b : Real number, \(a < b < \infty\) the right boundary

Returns

RandomSymbol

Examples

>>> from sympy.stats import PowerFunction, density, cdf, E, variance
>>> from sympy import Symbol
>>> alpha = Symbol("alpha", positive=True)
>>> a = Symbol("a", real=True)
>>> b = Symbol("b", real=True)
>>> z = Symbol("z")
>>> X = PowerFunction("X", 2, a, b)
>>> density(X)(z)
(-2*a + 2*z)/(-a + b)**2
>>> cdf(X)(z)
Piecewise((a**2/(a**2 - 2*a*b + b**2) - 2*a*z/(a**2 - 2*a*b + b**2) +
z**2/(a**2 - 2*a*b + b**2), a <= z), (0, True))
>>> alpha = 2
>>> a = 0
>>> b = 1
>>> Y = PowerFunction("Y", alpha, a, b)
>>> E(Y)
2/3
>>> variance(Y)
1/18

References

R788

http://www.mathwave.com/help/easyfit/html/analyses/distributions/power_func.html

sympy.stats.QuadraticU(name, a, b)[source]

Create a Continuous Random Variable with a U-quadratic distribution.

The density of the U-quadratic distribution is given by

\[f(x) := \alpha (x-\beta)^2\]

with \(x \in [a,b]\).

Parameters

a : Real number

b : Real number, \(a < b\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import QuadraticU, density
>>> from sympy import Symbol, pprint
>>> a = Symbol("a", real=True)
>>> b = Symbol("b", real=True)
>>> z = Symbol("z")
>>> X = QuadraticU("x", a, b)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
/                2
|   /  a   b    \
|12*|- - - - + z|
|   \  2   2    /
<-----------------  for And(b >= z, a <= z)
|            3
|    (-a + b)
|
\        0                 otherwise

References

R789

https://en.wikipedia.org/wiki/U-quadratic_distribution

sympy.stats.RaisedCosine(name, mu, s)[source]

Create a Continuous Random Variable with a raised cosine distribution.

The density of the raised cosine distribution is given by

\[f(x) := \frac{1}{2s}\left(1+\cos\left(\frac{x-\mu}{s}\pi\right)\right)\]

with \(x \in [\mu-s,\mu+s]\).

Parameters

mu : Real number

s : Real number, \(s > 0\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import RaisedCosine, density
>>> from sympy import Symbol, pprint
>>> mu = Symbol("mu", real=True)
>>> s = Symbol("s", positive=True)
>>> z = Symbol("z")
>>> X = RaisedCosine("x", mu, s)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
/   /pi*(-mu + z)\
|cos|------------| + 1
|   \     s      /
<---------------------  for And(z >= mu - s, z <= mu + s)
|         2*s
|
\          0                        otherwise

References

R790

https://en.wikipedia.org/wiki/Raised_cosine_distribution

sympy.stats.Rayleigh(name, sigma)[source]

Create a continuous random variable with a Rayleigh distribution.

The density of the Rayleigh distribution is given by

\[f(x) := \frac{x}{\sigma^2} e^{-x^2/2\sigma^2}\]

with \(x > 0\).

Parameters

sigma : Real number, \(\sigma > 0\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import Rayleigh, density, E, variance
>>> from sympy import Symbol
>>> sigma = Symbol("sigma", positive=True)
>>> z = Symbol("z")
>>> X = Rayleigh("x", sigma)
>>> density(X)(z)
z*exp(-z**2/(2*sigma**2))/sigma**2
>>> E(X)
sqrt(2)*sqrt(pi)*sigma/2
>>> variance(X)
-pi*sigma**2/2 + 2*sigma**2

References

R791

https://en.wikipedia.org/wiki/Rayleigh_distribution

R792

http://mathworld.wolfram.com/RayleighDistribution.html

sympy.stats.Reciprocal(name, a, b)[source]

Creates a continuous random variable with a reciprocal distribution.

Parameters

a : Real number, \(0 < a\)

b : Real number, \(a < b\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import Reciprocal, density, cdf
>>> from sympy import symbols
>>> a, b, x = symbols('a, b, x', positive=True)
>>> R = Reciprocal('R', a, b)
>>> density(R)(x)
1/(x*(-log(a) + log(b)))
>>> cdf(R)(x)
Piecewise((log(a)/(log(a) - log(b)) - log(x)/(log(a) - log(b)), a <= x), (0, True))

References

R793

https://en.wikipedia.org/wiki/Reciprocal_distribution

sympy.stats.StudentT(name, nu)[source]

Create a continuous random variable with a student’s t distribution.

The density of the student’s t distribution is given by

\[f(x) := \frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\nu\pi}\Gamma \left(\frac{\nu}{2} \right)} \left(1+\frac{x^2}{\nu} \right)^{-\frac{\nu+1}{2}}\]
Parameters

nu : Real number, \(\nu > 0\), the degrees of freedom

Returns

RandomSymbol

Examples

>>> from sympy.stats import StudentT, density, cdf
>>> from sympy import Symbol, pprint
>>> nu = Symbol("nu", positive=True)
>>> z = Symbol("z")
>>> X = StudentT("x", nu)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
           nu   1
         - -- - -
           2    2
 /     2\
 |    z |
 |1 + --|
 \    nu/
-----------------
  ____  /     nu\
\/ nu *B|1/2, --|
        \     2 /
>>> cdf(X)(z)
1/2 + z*gamma(nu/2 + 1/2)*hyper((1/2, nu/2 + 1/2), (3/2,),
                            -z**2/nu)/(sqrt(pi)*sqrt(nu)*gamma(nu/2))

References

R794

https://en.wikipedia.org/wiki/Student_t-distribution

R795

http://mathworld.wolfram.com/Studentst-Distribution.html

sympy.stats.ShiftedGompertz(name, b, eta)[source]

Create a continuous random variable with a Shifted Gompertz distribution.

The density of the Shifted Gompertz distribution is given by

\[f(x) := b e^{-b x} e^{-\eta \exp(-b x)} \left[1 + \eta\left(1 - e^{-b x}\right) \right]\]

with \(x \in [0, \infty)\).

Parameters

b : Real number, \(b > 0\), a scale

eta : Real number, \(\eta > 0\), a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import ShiftedGompertz, density
>>> from sympy import Symbol
>>> b = Symbol("b", positive=True)
>>> eta = Symbol("eta", positive=True)
>>> x = Symbol("x")
>>> X = ShiftedGompertz("x", b, eta)
>>> density(X)(x)
b*(eta*(1 - exp(-b*x)) + 1)*exp(-b*x)*exp(-eta*exp(-b*x))

References

R796

https://en.wikipedia.org/wiki/Shifted_Gompertz_distribution

sympy.stats.Trapezoidal(name, a, b, c, d)[source]

Create a continuous random variable with a trapezoidal distribution.

The density of the trapezoidal distribution is given by

\[\begin{split}f(x) := \begin{cases} 0 & \mathrm{for\ } x < a, \\ \frac{2(x-a)}{(b-a)(d+c-a-b)} & \mathrm{for\ } a \le x < b, \\ \frac{2}{d+c-a-b} & \mathrm{for\ } b \le x < c, \\ \frac{2(d-x)}{(d-c)(d+c-a-b)} & \mathrm{for\ } c \le x < d, \\ 0 & \mathrm{for\ } d < x. \end{cases}\end{split}\]
Parameters

a : Real number, \(a < d\)

b : Real number, \(a \leq b < c\)

c : Real number, \(b < c \leq d\)

d : Real number

Returns

RandomSymbol

Examples

>>> from sympy.stats import Trapezoidal, density
>>> from sympy import Symbol, pprint
>>> a = Symbol("a")
>>> b = Symbol("b")
>>> c = Symbol("c")
>>> d = Symbol("d")
>>> z = Symbol("z")
>>> X = Trapezoidal("x", a,b,c,d)
>>> pprint(density(X)(z), use_unicode=False)
/        -2*a + 2*z
|-------------------------  for And(a <= z, b > z)
|(-a + b)*(-a - b + c + d)
|
|           2
|     --------------        for And(b <= z, c > z)
<     -a - b + c + d
|
|        2*d - 2*z
|-------------------------  for And(d >= z, c <= z)
|(-c + d)*(-a - b + c + d)
|
\            0                     otherwise

References

R797

https://en.wikipedia.org/wiki/Trapezoidal_distribution

sympy.stats.Triangular(name, a, b, c)[source]

Create a continuous random variable with a triangular distribution.

The density of the triangular distribution is given by

\[\begin{split}f(x) := \begin{cases} 0 & \mathrm{for\ } x < a, \\ \frac{2(x-a)}{(b-a)(c-a)} & \mathrm{for\ } a \le x < c, \\ \frac{2}{b-a} & \mathrm{for\ } x = c, \\ \frac{2(b-x)}{(b-a)(b-c)} & \mathrm{for\ } c < x \le b, \\ 0 & \mathrm{for\ } b < x. \end{cases}\end{split}\]
Parameters

a : Real number, \(a \in \left(-\infty, \infty\right)\)

b : Real number, \(a < b\)

c : Real number, \(a \leq c \leq b\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import Triangular, density
>>> from sympy import Symbol, pprint
>>> a = Symbol("a")
>>> b = Symbol("b")
>>> c = Symbol("c")
>>> z = Symbol("z")
>>> X = Triangular("x", a,b,c)
>>> pprint(density(X)(z), use_unicode=False)
/    -2*a + 2*z
|-----------------  for And(a <= z, c > z)
|(-a + b)*(-a + c)
|
|       2
|     ------              for c = z
<     -a + b
|
|   2*b - 2*z
|----------------   for And(b >= z, c < z)
|(-a + b)*(b - c)
|
\        0                otherwise

References

R798

https://en.wikipedia.org/wiki/Triangular_distribution

R799

http://mathworld.wolfram.com/TriangularDistribution.html

sympy.stats.Uniform(name, left, right)[source]

Create a continuous random variable with a uniform distribution.

The density of the uniform distribution is given by

\[\begin{split}f(x) := \begin{cases} \frac{1}{b - a} & \text{for } x \in [a,b] \\ 0 & \text{otherwise} \end{cases}\end{split}\]

with \(x \in [a,b]\).

Parameters

a : Real number, \(-\infty < a\) the left boundary

b : Real number, \(a < b < \infty\) the right boundary

Returns

RandomSymbol

Examples

>>> from sympy.stats import Uniform, density, cdf, E, variance
>>> from sympy import Symbol, simplify
>>> a = Symbol("a", negative=True)
>>> b = Symbol("b", positive=True)
>>> z = Symbol("z")
>>> X = Uniform("x", a, b)
>>> density(X)(z)
Piecewise((1/(-a + b), (b >= z) & (a <= z)), (0, True))
>>> cdf(X)(z)
Piecewise((0, a > z), ((-a + z)/(-a + b), b >= z), (1, True))
>>> E(X)
a/2 + b/2
>>> simplify(variance(X))
a**2/12 - a*b/6 + b**2/12

References

R800

https://en.wikipedia.org/wiki/Uniform_distribution_%28continuous%29

R801

http://mathworld.wolfram.com/UniformDistribution.html

sympy.stats.UniformSum(name, n)[source]

Create a continuous random variable with an Irwin-Hall distribution.

The probability distribution function depends on a single parameter \(n\) which is an integer.

The density of the Irwin-Hall distribution is given by

\[f(x) := \frac{1}{(n-1)!}\sum_{k=0}^{\left\lfloor x\right\rfloor}(-1)^k \binom{n}{k}(x-k)^{n-1}\]
Parameters

n : A positive Integer, \(n > 0\)

Returns

RandomSymbol

Examples

>>> from sympy.stats import UniformSum, density, cdf
>>> from sympy import Symbol, pprint
>>> n = Symbol("n", integer=True)
>>> z = Symbol("z")
>>> X = UniformSum("x", n)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
floor(z)
  ___
  \  `
   \         k         n - 1 /n\
    )    (-1) *(-k + z)     *| |
   /                         \k/
  /__,
 k = 0
--------------------------------
            (n - 1)!
>>> cdf(X)(z)
Piecewise((0, z < 0), (Sum((-1)**_k*(-_k + z)**n*binomial(n, _k),
                (_k, 0, floor(z)))/factorial(n), n >= z), (1, True))

To compute the cdf for specific values of x and n:

>>> cdf(UniformSum("x", 5), evaluate=False)(2).doit()
9/40

The argument evaluate=False prevents an attempt to evaluate the sum for general n before the argument 2 is passed.

References

R802

https://en.wikipedia.org/wiki/Uniform_sum_distribution

R803

http://mathworld.wolfram.com/UniformSumDistribution.html

sympy.stats.VonMises(name, mu, k)[source]

Create a Continuous Random Variable with a von Mises distribution.

The density of the von Mises distribution is given by

\[f(x) := \frac{e^{\kappa\cos(x-\mu)}}{2\pi I_0(\kappa)}\]

with \(x \in [0,2\pi]\).

Parameters

mu : Real number, measure of location

k : Real number, measure of concentration

Returns

RandomSymbol

Examples

>>> from sympy.stats import VonMises, density
>>> from sympy import Symbol, pprint
>>> mu = Symbol("mu")
>>> k = Symbol("k", positive=True)
>>> z = Symbol("z")
>>> X = VonMises("x", mu, k)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
     k*cos(mu - z)
    e
------------------
2*pi*besseli(0, k)

References

R804

https://en.wikipedia.org/wiki/Von_Mises_distribution

R805

http://mathworld.wolfram.com/vonMisesDistribution.html

sympy.stats.Wald(name, mean, shape)[source]

Create a continuous random variable with an Inverse Gaussian distribution. The Inverse Gaussian distribution is also known as the Wald distribution.

The density of the Inverse Gaussian distribution is given by

\[f(x) := \sqrt{\frac{\lambda}{2\pi x^3}} e^{-\frac{\lambda(x-\mu)^2}{2x\mu^2}}\]
Parameters

mu : Positive number representing the mean

lambda : Positive number representing the shape parameter

Returns

RandomSymbol

Examples

>>> from sympy.stats import GaussianInverse, density, E, std, skewness
>>> from sympy import Symbol, pprint
>>> mu = Symbol("mu", positive=True)
>>> lamda = Symbol("lambda", positive=True)
>>> z = Symbol("z", positive=True)
>>> X = GaussianInverse("x", mu, lamda)
>>> D = density(X)(z)
>>> pprint(D, use_unicode=False)
                                   2
                  -lambda*(-mu + z)
                  -------------------
                            2
  ___   ________        2*mu *z
\/ 2 *\/ lambda *e
-------------------------------------
                ____  3/2
            2*\/ pi *z
>>> E(X)
mu
>>> std(X).expand()
mu**(3/2)/sqrt(lambda)
>>> skewness(X).expand()
3*sqrt(mu)/sqrt(lambda)

References

R806

https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution

R807

http://mathworld.wolfram.com/InverseGaussianDistribution.html

sympy.stats.Weibull(name, alpha, beta)[source]

Create a continuous random variable with a Weibull distribution.

The density of the Weibull distribution is given by

\[\begin{split}f(x) := \begin{cases} \frac{k}{\lambda}\left(\frac{x}{\lambda}\right)^{k-1} e^{-(x/\lambda)^{k}} & x\geq0\\ 0 & x<0 \end{cases}\end{split}\]
Parameters

lambda : Real number, \(\lambda > 0\) a scale

k : Real number, \(k > 0\) a shape

Returns

RandomSymbol

Examples

>>> from sympy.stats import Weibull, density, E, variance
>>> from sympy import Symbol, simplify
>>> l = Symbol("lambda", positive=True)
>>> k = Symbol("k", positive=True)
>>> z = Symbol("z")
>>> X = Weibull("x", l, k)
>>> density(X)(z)
k*(z/lambda)**(k - 1)*exp(-(z/lambda)**k)/lambda
>>> simplify(E(X))
lambda*gamma(1 + 1/k)
>>> simplify(variance(X))
lambda**2*(-gamma(1 + 1/k)**2 + gamma(1 + 2/k))

References

R808

https://en.wikipedia.org/wiki/Weibull_distribution

R809

http://mathworld.wolfram.com/WeibullDistribution.html

sympy.stats.WignerSemicircle(name, R)[source]

Create a continuous random variable with a Wigner semicircle distribution.

The density of the Wigner semicircle distribution is given by

\[f(x) := \frac2{\pi R^2}\,\sqrt{R^2-x^2}\]

with \(x \in [-R,R]\).

Parameters

R : Real number, \(R > 0\), the radius

Returns

RandomSymbol

Examples

>>> from sympy.stats import WignerSemicircle, density, E
>>> from sympy import Symbol
>>> R = Symbol("R", positive=True)
>>> z = Symbol("z")
>>> X = WignerSemicircle("x", R)
>>> density(X)(z)
2*sqrt(R**2 - z**2)/(pi*R**2)
>>> E(X)
0

References

R810

https://en.wikipedia.org/wiki/Wigner_semicircle_distribution

R811

http://mathworld.wolfram.com/WignersSemicircleLaw.html

sympy.stats.ContinuousRV(symbol, density, set=Interval(- oo, oo), **kwargs)[source]

Create a Continuous Random Variable given the following:

Parameters

symbol : Symbol

Represents name of the random variable.

density : Expression containing symbol

Represents probability density function.

set : set/Interval

Represents the region where the pdf is valid; by default it is the whole real line.

check : bool

If True, it will check whether the given density integrates to 1 over the given set. If False, it will not perform this check. Default is False.

Returns

RandomSymbol

Many common continuous random variable types are already implemented.

This function should rarely be necessary.

Examples

>>> from sympy import Symbol, sqrt, exp, pi
>>> from sympy.stats import ContinuousRV, P, E
>>> x = Symbol("x")
>>> pdf = sqrt(2)*exp(-x**2/2)/(2*sqrt(pi)) # Normal distribution
>>> X = ContinuousRV(x, pdf)
>>> E(X)
0
>>> P(X>0)
1/2
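
The check keyword documented above can be used to validate a handmade density at construction time. The following is a minimal sketch reusing the normal pdf from this example; with check=True the constructor is expected to verify that the pdf integrates to 1 over the given set (no output is shown because construction simply succeeds for a valid density):

>>> from sympy import Symbol, sqrt, exp, pi
>>> from sympy.stats import ContinuousRV
>>> x = Symbol("x")
>>> pdf = sqrt(2)*exp(-x**2/2)/(2*sqrt(pi)) # standard normal pdf
>>> X = ContinuousRV(x, pdf, check=True) # assumed to raise an error if the pdf did not integrate to 1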

Joint Types

sympy.stats.JointRV(symbol, pdf, _set=None)[source]

Create a Joint Random Variable where each of its components is continuous, given the following:

– a symbol
– a PDF in terms of indexed symbols of the symbol given as the first argument

NOTE: As of now, the set for each component of a \(JointRV\) is the set of all integers, which cannot be changed.

Returns

RandomSymbol

Examples

>>> from sympy import exp, pi, Indexed, S
>>> from sympy.stats import density, JointRV
>>> x1, x2 = (Indexed('x', i) for i in (1, 2))
>>> pdf = exp(-x1**2/2 + x1 - x2**2/2 - S(1)/2)/(2*pi)
>>> N1 = JointRV('x', pdf) #Multivariate Normal distribution
>>> density(N1)(1, 2)
exp(-2)/(2*pi)
sympy.stats.marginal_distribution(rv, *indices)[source]

Marginal distribution function of a joint random variable.

Parameters

rv: A random variable with a joint probability distribution.

indices: component indices or the indexed random symbol

for which the marginal distribution is to be calculated

Returns

A Lambda expression in the indexed symbols of the selected components.

Examples

>>> from sympy.stats import MultivariateNormal, marginal_distribution
>>> m = MultivariateNormal('X', [1, 2], [[2, 1], [1, 2]])
>>> marginal_distribution(m, m[0])(1)
1/(2*sqrt(pi))
sympy.stats.MultivariateNormal(name, mu, sigma)[source]

Creates a continuous random variable with Multivariate Normal Distribution.

The density of the multivariate normal distribution can be found at [1].

Parameters

mu : List representing the mean or the mean vector

sigma : Positive definite square matrix

Represents covariance Matrix

Returns

RandomSymbol

Examples

>>> from sympy.stats import MultivariateNormal, density, marginal_distribution
>>> from sympy import symbols
>>> X = MultivariateNormal('X', [3, 4], [[2, 1], [1, 2]])
>>> y, z = symbols('y z')
>>> density(X)(y, z)
sqrt(3)*exp((3/2 - y/2)*(2*y/3 - z/3 - 2/3) + (2 - z/2)*(-y/3 + 2*z/3 - 5/3))/(6*pi)
>>> density(X)(1, 2)
sqrt(3)*exp(-4/3)/(6*pi)
>>> marginal_distribution(X, X[1])(y)
exp((2 - y/2)*(y/2 - 2))/(2*sqrt(pi))
>>> marginal_distribution(X, X[0])(y)
exp((3/2 - y/2)*(y/2 - 3/2))/(2*sqrt(pi))

References

R812

https://en.wikipedia.org/wiki/Multivariate_normal_distribution

sympy.stats.MultivariateLaplace(name, mu, sigma)[source]

Creates a continuous random variable with Multivariate Laplace Distribution.

The density of the multivariate Laplace distribution can be found at [1].

Parameters

mu : List representing the mean or the mean vector

sigma : Positive definite square matrix

Represents covariance Matrix

Returns

RandomSymbol

Examples

>>> from sympy.stats import MultivariateLaplace, density
>>> from sympy import symbols
>>> y, z = symbols('y z')
>>> X = MultivariateLaplace('X', [2, 4], [[3, 1], [1, 3]])
>>> density(X)(y, z)
sqrt(2)*exp(y/4 + 5*z/4)*besselk(0, sqrt(15*y*(3*y/8 - z/8)/2 + 15*z*(-y/8 + 3*z/8)/2))/(4*pi)
>>> density(X)(1, 2)
sqrt(2)*exp(11/4)*besselk(0, sqrt(165)/4)/(4*pi)

References

R813

https://en.wikipedia.org/wiki/Multivariate_Laplace_distribution

sympy.stats.GeneralizedMultivariateLogGamma(syms, delta, v, lamda, mu)[source]

Creates a joint random variable with generalized multivariate log gamma distribution.

The joint pdf can be found at [1].

Parameters

syms: list/tuple/set of symbols for identifying each component

delta: A constant in range [0, 1]

v: Positive real number

lamda: List of positive real numbers

mu: List of positive real numbers

Returns

RandomSymbol

Examples

>>> from sympy.stats import density
>>> from sympy.stats.joint_rv_types import GeneralizedMultivariateLogGamma
>>> from sympy import symbols, S
>>> v = 1
>>> l, mu = [1, 1, 1], [1, 1, 1]
>>> d = S.Half
>>> y = symbols('y_1:4', positive=True)
>>> Gd = GeneralizedMultivariateLogGamma('G', d, v, l, mu)
>>> density(Gd)(y[0], y[1], y[2])
Sum(2**(-n)*exp((n + 1)*(y_1 + y_2 + y_3) - exp(y_1) - exp(y_2) -
exp(y_3))/gamma(n + 1)**3, (n, 0, oo))/2

Note

If GeneralizedMultivariateLogGamma is too long to type, use \(from sympy.stats.joint_rv_types import GeneralizedMultivariateLogGamma as GMVLG\). If you want to pass the matrix omega instead of the constant delta, use GeneralizedMultivariateLogGammaOmega.
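
The aliases suggested in this note can be set up directly (a purely local naming convenience):

>>> from sympy.stats.joint_rv_types import GeneralizedMultivariateLogGamma as GMVLG
>>> from sympy.stats.joint_rv_types import GeneralizedMultivariateLogGammaOmega as GMVLGO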

References

R814

https://en.wikipedia.org/wiki/Generalized_multivariate_log-gamma_distribution

R815

https://www.researchgate.net/publication/234137346_On_a_multivariate_log-gamma_distribution_and_the_use_of_the_distribution_in_the_Bayesian_analysis

sympy.stats.GeneralizedMultivariateLogGammaOmega(syms, omega, v, lamda, mu)[source]

Extends GeneralizedMultivariateLogGamma.

Parameters

syms: list/tuple/set of symbols

For identifying each component

omega: A square matrix

Every element of the square matrix must be the absolute value of the square root of a correlation coefficient

v: Positive real number

lamda: List of positive real numbers

mu: List of positive real numbers

Returns

RandomSymbol

Examples

>>> from sympy.stats import density
>>> from sympy.stats.joint_rv_types import GeneralizedMultivariateLogGammaOmega
>>> from sympy import Matrix, symbols, S
>>> omega = Matrix([[1, S.Half, S.Half], [S.Half, 1, S.Half], [S.Half, S.Half, 1]])
>>> v = 1
>>> l, mu = [1, 1, 1], [1, 1, 1]
>>> G = GeneralizedMultivariateLogGammaOmega('G', omega, v, l, mu)
>>> y = symbols('y_1:4', positive=True)
>>> density(G)(y[0], y[1], y[2])
sqrt(2)*Sum((1 - sqrt(2)/2)**n*exp((n + 1)*(y_1 + y_2 + y_3) - exp(y_1) -
exp(y_2) - exp(y_3))/gamma(n + 1)**3, (n, 0, oo))/2

Notes

If GeneralizedMultivariateLogGammaOmega is too long to type, use \(from sympy.stats.joint_rv_types import GeneralizedMultivariateLogGammaOmega as GMVLGO\).

References

R816

https://en.wikipedia.org/wiki/Generalized_multivariate_log-gamma_distribution

R817

https://www.researchgate.net/publication/234137346_On_a_multivariate_log-gamma_distribution_and_the_use_of_the_distribution_in_the_Bayesian_analysis

sympy.stats.Multinomial(syms, n, *p)[source]

Creates a discrete random variable with Multinomial Distribution.

The density of the said distribution can be found at [1].

Parameters

n: Positive integer

Represents number of trials

p: List of event probabilities

Must be in the range of [0, 1]

Returns

RandomSymbol

Examples

>>> from sympy.stats import density,  Multinomial, marginal_distribution
>>> from sympy import symbols
>>> x1, x2, x3 = symbols('x1, x2, x3', nonnegative=True, integer=True)
>>> p1, p2, p3 = symbols('p1, p2, p3', positive=True)
>>> M = Multinomial('M', 3, p1, p2, p3)
>>> density(M)(x1, x2, x3)
Piecewise((6*p1**x1*p2**x2*p3**x3/(factorial(x1)*factorial(x2)*factorial(x3)),
Eq(x1 + x2 + x3, 3)), (0, True))
>>> marginal_distribution(M, M[0])(x1).subs(x1, 1)
3*p1*p2**2 + 6*p1*p2*p3 + 3*p1*p3**2

References

R818

https://en.wikipedia.org/wiki/Multinomial_distribution

R819

http://mathworld.wolfram.com/MultinomialDistribution.html

sympy.stats.MultivariateBeta(syms, *alpha)[source]

Creates a continuous random variable with Dirichlet/Multivariate Beta Distribution.

The density of the dirichlet distribution can be found at [1].

Parameters

alpha: Positive real numbers

Signifies concentration numbers.

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, MultivariateBeta, marginal_distribution
>>> from sympy import Symbol
>>> a1 = Symbol('a1', positive=True)
>>> a2 = Symbol('a2', positive=True)
>>> B = MultivariateBeta('B', [a1, a2])
>>> C = MultivariateBeta('C', a1, a2)
>>> x = Symbol('x')
>>> y = Symbol('y')
>>> density(B)(x, y)
x**(a1 - 1)*y**(a2 - 1)*gamma(a1 + a2)/(gamma(a1)*gamma(a2))
>>> marginal_distribution(C, C[0])(x)
x**(a1 - 1)*gamma(a1 + a2)/(a2*gamma(a1)*gamma(a2))

References

R820

https://en.wikipedia.org/wiki/Dirichlet_distribution

R821

http://mathworld.wolfram.com/DirichletDistribution.html

sympy.stats.MultivariateEwens(syms, n, theta)[source]

Creates a discrete random variable with Multivariate Ewens Distribution.

The density of the said distribution can be found at [1].

Parameters

n: Positive integer

Size of the sample or the integer whose partitions are considered

theta: Positive real number

Denotes Mutation rate

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, marginal_distribution, MultivariateEwens
>>> from sympy import Symbol
>>> a1 = Symbol('a1', positive=True)
>>> a2 = Symbol('a2', positive=True)
>>> ed = MultivariateEwens('E', 2, 1)
>>> density(ed)(a1, a2)
Piecewise((2**(-a2)/(factorial(a1)*factorial(a2)), Eq(a1 + 2*a2, 2)), (0, True))
>>> marginal_distribution(ed, ed[0])(a1)
Piecewise((1/factorial(a1), Eq(a1, 2)), (0, True))

References

R822

https://en.wikipedia.org/wiki/Ewens%27s_sampling_formula

R823

http://www.stat.rutgers.edu/home/hcrane/Papers/STS529.pdf

sympy.stats.MultivariateT(syms, mu, sigma, v)[source]

Creates a joint random variable with multivariate T-distribution.

Parameters

syms: A symbol/str

For identifying the random variable.

mu: A list/matrix

Representing the location vector

sigma: The shape matrix for the distribution

v: Positive real number

Represents the degrees of freedom

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, MultivariateT
>>> from sympy import Symbol
>>> x = Symbol("x")
>>> X = MultivariateT("x", [1, 1], [[1, 0], [0, 1]], 2)
>>> density(X)(1, 2)
2/(9*pi)
sympy.stats.NegativeMultinomial(syms, k0, *p)[source]

Creates a discrete random variable with Negative Multinomial Distribution.

The density of the said distribution can be found at [1].

Parameters

k0: positive integer

Represents number of failures before the experiment is stopped

p: List of event probabilities

Must be in the range of [0, 1]

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, NegativeMultinomial, marginal_distribution
>>> from sympy import symbols
>>> x1, x2, x3 = symbols('x1, x2, x3', nonnegative=True, integer=True)
>>> p1, p2, p3 = symbols('p1, p2, p3', positive=True)
>>> N = NegativeMultinomial('M', 3, p1, p2, p3)
>>> N_c = NegativeMultinomial('M', 3, 0.1, 0.1, 0.1)
>>> density(N)(x1, x2, x3)
p1**x1*p2**x2*p3**x3*(-p1 - p2 - p3 + 1)**3*gamma(x1 + x2 +
x3 + 3)/(2*factorial(x1)*factorial(x2)*factorial(x3))
>>> marginal_distribution(N_c, N_c[0])(1).evalf().round(2)
0.25

References

R824

https://en.wikipedia.org/wiki/Negative_multinomial_distribution

R825

http://mathworld.wolfram.com/NegativeBinomialDistribution.html

sympy.stats.NormalGamma(sym, mu, lamda, alpha, beta)[source]

Creates a bivariate joint random variable with multivariate Normal gamma distribution.

Parameters

sym: A symbol/str

For identifying the random variable.

mu: A real number

The mean of the normal distribution

lamda: A positive integer

Parameter of joint distribution

alpha: A positive integer

Parameter of joint distribution

beta: A positive integer

Parameter of joint distribution

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, NormalGamma
>>> from sympy import symbols
>>> X = NormalGamma('x', 0, 1, 2, 3)
>>> y, z = symbols('y z')
>>> density(X)(y, z)
9*sqrt(2)*z**(3/2)*exp(-3*z)*exp(-y**2*z/2)/(2*sqrt(pi))

References

R826

https://en.wikipedia.org/wiki/Normal-gamma_distribution

Stochastic Processes

class sympy.stats.DiscreteMarkovChain(sym: sympy.core.basic.Basic, state_space: Optional[Union[str, sympy.core.symbol.Symbol]] = None, trans_probs: Optional[Sequence] = None)[source]

Represents a finite discrete time-homogeneous Markov chain.

This type of Markov Chain can be uniquely characterised by its (ordered) state space and its one-step transition probability matrix.

Parameters

sym:

The name given to the Markov Chain

state_space:

Optional, by default, Range(n)

trans_probs:

Optional, by default, MatrixSymbol(‘_T’, n, n)

Examples

>>> from sympy.stats import DiscreteMarkovChain, TransitionMatrixOf, P, E
>>> from sympy import Matrix, MatrixSymbol, Eq, symbols
>>> T = Matrix([[0.5, 0.2, 0.3],[0.2, 0.5, 0.3],[0.2, 0.3, 0.5]])
>>> Y = DiscreteMarkovChain("Y", [0, 1, 2], T)
>>> YS = DiscreteMarkovChain("Y")
>>> Y.state_space
FiniteSet(0, 1, 2)
>>> Y.transition_probabilities
Matrix([
[0.5, 0.2, 0.3],
[0.2, 0.5, 0.3],
[0.2, 0.3, 0.5]])
>>> TS = MatrixSymbol('T', 3, 3)
>>> P(Eq(YS[3], 2), Eq(YS[1], 1) & TransitionMatrixOf(YS, TS))
T[0, 2]*T[1, 0] + T[1, 1]*T[1, 2] + T[1, 2]*T[2, 2]
>>> P(Eq(Y[3], 2), Eq(Y[1], 1)).round(2)
0.36

Probabilities will be calculated based on indices rather than state names. For example, consider the Sunny-Cloudy-Rainy model with string state names:

>>> from sympy.core.symbol import Str
>>> Y = DiscreteMarkovChain("Y", [Str('Sunny'), Str('Cloudy'), Str('Rainy')], T)
>>> P(Eq(Y[3], 2), Eq(Y[1], 1)).round(2)
0.36

This gives the same answer as the [0, 1, 2] state space. Currently, there is no support for state names within probability and expectation statements. Here is a work-around using Str:

>>> P(Eq(Str('Rainy'), Y[3]), Eq(Y[1], Str('Cloudy'))).round(2)
0.36

Symbol state names can also be used:

>>> sunny, cloudy, rainy = symbols('Sunny, Cloudy, Rainy')
>>> Y = DiscreteMarkovChain("Y", [sunny, cloudy, rainy], T)
>>> P(Eq(Y[3], rainy), Eq(Y[1], cloudy)).round(2)
0.36

Expectations will be calculated as follows:

>>> E(Y[3], Eq(Y[1], cloudy))
0.38*Cloudy + 0.36*Rainy + 0.26*Sunny

There is limited support for arbitrarily sized states:

>>> n = symbols('n', nonnegative=True, integer=True)
>>> T = MatrixSymbol('T', n, n)
>>> Y = DiscreteMarkovChain("Y", trans_probs=T)
>>> Y.state_space
Range(0, n, 1)
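
Because the chain is time-homogeneous, multi-step transition probabilities are entries of powers of the one-step matrix (Chapman-Kolmogorov). The following sketch reuses the numeric matrix T from the first example and reproduces the 0.36 computed above:

>>> from sympy import Matrix
>>> T = Matrix([[0.5, 0.2, 0.3], [0.2, 0.5, 0.3], [0.2, 0.3, 0.5]])
>>> (T**2)[1, 2].round(2) # two-step probability of going from state 1 to state 2
0.36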

References

R827

https://en.wikipedia.org/wiki/Markov_chain#Discrete-time_Markov_chain

R828

https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf

class sympy.stats.ContinuousMarkovChain(sym, state_space=None, gen_mat=None)[source]

Represents continuous time Markov chain.

Parameters

sym: Symbol/str

state_space: Set

Optional, by default, S.Reals

gen_mat: Matrix/ImmutableMatrix/MatrixSymbol

Optional, by default, None

Examples

>>> from sympy.stats import ContinuousMarkovChain
>>> from sympy import Matrix, S
>>> G = Matrix([[-S(1), S(1)], [S(1), -S(1)]])
>>> C = ContinuousMarkovChain('C', state_space=[0, 1], gen_mat=G)
>>> C.limiting_distribution()
Matrix([[1/2, 1/2]])

References

R829

https://en.wikipedia.org/wiki/Markov_chain#Continuous-time_Markov_chain

R830

http://u.math.biu.ac.il/~amirgi/CTMCnotes.pdf

class sympy.stats.BernoulliProcess(sym, p, success=1, failure=0)[source]

The Bernoulli process consists of repeated independent Bernoulli trials with the same parameter \(p\). It is assumed that the probability \(p\) applies to every trial and that the outcomes of each trial are independent of all the rest. The Bernoulli process is therefore a discrete-state, discrete-time stochastic process.

Parameters

sym: Symbol/str

success: Integer/str

The event which is considered to be success, by default is 1.

failure: Integer/str

The event which is considered to be failure, by default is 0.

p: Real Number between 0 and 1

Represents the probability of getting success.

Examples

>>> from sympy.stats import BernoulliProcess, P, E
>>> from sympy import Eq, Gt
>>> B = BernoulliProcess("B", p=0.7, success=1, failure=0)
>>> B.state_space
FiniteSet(0, 1)
>>> (B.p).round(2)
0.70
>>> B.success
1
>>> B.failure
0
>>> X = B[1] + B[2] + B[3]
>>> P(Eq(X, 0)).round(2)
0.03
>>> P(Eq(X, 2)).round(2)
0.44
>>> P(Eq(X, 4)).round(2)
0
>>> P(Gt(X, 1)).round(2)
0.78
>>> P(Eq(B[1], 0) & Eq(B[2], 1) & Eq(B[3], 0) & Eq(B[4], 1)).round(2)
0.04
>>> B.joint_distribution(B[1], B[2])
JointDistributionHandmade(Lambda((B[1], B[2]), Piecewise((0.7, Eq(B[1], 1)),
(0.3, Eq(B[1], 0)), (0, True))*Piecewise((0.7, Eq(B[2], 1)), (0.3, Eq(B[2], 0)),
(0, True))))
>>> E(2*B[1] + B[2]).round(2)
2.10
>>> P(B[1] < 1).round(2)
0.30

References

R831

https://en.wikipedia.org/wiki/Bernoulli_process

R832

https://mathcs.clarku.edu/~djoyce/ma217/bernoulli.pdf

class sympy.stats.PoissonProcess(sym, lamda)[source]

The Poisson process is a counting process. It is usually used in scenarios where we are counting the occurrences of certain events that appear to happen at a certain rate, but completely at random.

Parameters

sym: Symbol/str

lamda: Positive number

Rate of the process, lamda > 0

Examples

>>> from sympy.stats import PoissonProcess, P, E
>>> from sympy import symbols, Eq, Ne, Contains, Interval
>>> X = PoissonProcess("X", lamda=3)
>>> X.state_space
Naturals0
>>> X.lamda
3
>>> t1, t2 = symbols('t1 t2', positive=True)
>>> P(X(t1) < 4)
(9*t1**3/2 + 9*t1**2/2 + 3*t1 + 1)*exp(-3*t1)
>>> P(Eq(X(t1), 2) | Ne(X(t1), 4), Contains(t1, Interval.Ropen(2, 4)))
1 - 36*exp(-6)
>>> P(Eq(X(t1), 2) & Eq(X(t2), 3), Contains(t1, Interval.Lopen(0, 2))
... & Contains(t2, Interval.Lopen(2, 4)))
648*exp(-12)
>>> E(X(t1))
3*t1
>>> E(X(t1)**2 + 2*X(t2),  Contains(t1, Interval.Lopen(0, 1))
... & Contains(t2, Interval.Lopen(1, 2)))
18
>>> P(X(3) < 1, Eq(X(1), 0))
exp(-6)
>>> P(Eq(X(4), 3), Eq(X(2), 3))
exp(-6)
>>> P(X(2) <= 3, X(1) > 1)
5*exp(-3)

Merging two Poisson Processes

>>> Y = PoissonProcess("Y", lamda=4)
>>> Z = X + Y
>>> Z.lamda
7

Splitting a Poisson Process into two independent Poisson Processes

>>> N, M = Z.split(l1=2, l2=5)
>>> N.lamda, M.lamda
(2, 5)

References

R833

https://www.probabilitycourse.com/chapter11/11_0_0_intro.php

R834

https://en.wikipedia.org/wiki/Poisson_point_process

class sympy.stats.WienerProcess(sym)[source]

The Wiener process is a real-valued continuous-time stochastic process. In physics it is used to study Brownian motion, and it is therefore also known as Brownian motion.

Parameters

sym: Symbol/str

Examples

>>> from sympy.stats import WienerProcess, P, E
>>> from sympy import symbols, Contains, Interval
>>> X = WienerProcess("X")
>>> X.state_space
Reals
>>> t1, t2 = symbols('t1 t2', positive=True)
>>> P(X(t1) < 7).simplify()
erf(7*sqrt(2)/(2*sqrt(t1)))/2 + 1/2
>>> P((X(t1) > 2) | (X(t1) < 4), Contains(t1, Interval.Ropen(2, 4))).simplify()
-erf(1)/2 + erf(2)/2 + 1
>>> E(X(t1))
0
>>> E(X(t1) + 2*X(t2),  Contains(t1, Interval.Lopen(0, 1))
... & Contains(t2, Interval.Lopen(1, 2)))
0

References

R835

https://www.probabilitycourse.com/chapter11/11_4_0_brownian_motion_wiener_process.php

R836

https://en.wikipedia.org/wiki/Wiener_process

class sympy.stats.GammaProcess(sym, lamda, gamma)[source]

A Gamma process is a random process with independent gamma-distributed increments. It is a pure-jump, increasing Lévy process.

Parameters

sym: Symbol/str

lamda: Positive number

Jump size of the process, lamda > 0

gamma: Positive number

Rate of jump arrivals, gamma > 0

Examples

>>> from sympy.stats import GammaProcess, E, P, variance
>>> from sympy import symbols, Contains, Interval, Not
>>> t, d, x, l, g = symbols('t d x l g', positive=True)
>>> X = GammaProcess("X", l, g)
>>> E(X(t))
g*t/l
>>> variance(X(t)).simplify()
g*t/l**2
>>> X = GammaProcess('X', 1, 2)
>>> P(X(t) < 1).simplify()
lowergamma(2*t, 1)/gamma(2*t)
>>> P(Not((X(t) < 5) & (X(d) > 3)), Contains(t, Interval.Ropen(2, 4)) &
... Contains(d, Interval.Lopen(7, 8))).simplify()
-4*exp(-3) + 472*exp(-8)/3 + 1
>>> E(X(2) + x*E(X(5)))
10*x + 4

References

R837

https://en.wikipedia.org/wiki/Gamma_process

Matrix Distributions

sympy.stats.MatrixGamma(symbol, alpha, beta, scale_matrix)[source]

Creates a random variable with Matrix Gamma Distribution.

The density of the said distribution can be found at [1].

Parameters

alpha: Positive Real number

Shape Parameter

beta: Positive Real number

Scale Parameter

scale_matrix: Positive definite real square matrix

Scale Matrix

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, MatrixGamma
>>> from sympy import MatrixSymbol, symbols
>>> a, b = symbols('a b', positive=True)
>>> M = MatrixGamma('M', a, b, [[2, 1], [1, 2]])
>>> X = MatrixSymbol('X', 2, 2)
>>> density(M)(X).doit()
3**(-a)*b**(-2*a)*exp(Trace(Matrix([
[-2/3,  1/3],
[ 1/3, -2/3]])*X)/b)*Determinant(X)**(a - 3/2)/(sqrt(pi)*gamma(a)*gamma(a - 1/2))
>>> density(M)([[1, 0], [0, 1]]).doit()
3**(-a)*b**(-2*a)*exp(-4/(3*b))/(sqrt(pi)*gamma(a)*gamma(a - 1/2))

References

R838

https://en.wikipedia.org/wiki/Matrix_gamma_distribution

sympy.stats.Wishart(symbol, n, scale_matrix)[source]

Creates a random variable with Wishart Distribution.

The density of the said distribution can be found at [1].

Parameters

n: Positive Real number

Represents degrees of freedom

scale_matrix: Positive definite real square matrix

Scale Matrix

Returns

RandomSymbol

Examples

>>> from sympy.stats import density, Wishart
>>> from sympy import MatrixSymbol, symbols
>>> n = symbols('n', positive=True)
>>> W = Wishart('W', n, [[2, 1], [1, 2]])
>>> X = MatrixSymbol('X', 2, 2)
>>> density(W)(X).doit()
2**(-n)*3**(-n/2)*exp(Trace(Matrix([
[-1/3,  1/6],
[ 1/6, -1/3]])*X))*Determinant(X)**(n/2 - 3/2)/(sqrt(pi)*gamma(n/2)*gamma(n/2 - 1/2))
>>> density(W)([[1, 0], [0, 1]]).doit()
2**(-n)*3**(-n/2)*exp(-2/3)/(sqrt(pi)*gamma(n/2)*gamma(n/2 - 1/2))

References

R839

https://en.wikipedia.org/wiki/Wishart_distribution

sympy.stats.MatrixNormal(symbol, location_matrix, scale_matrix_1, scale_matrix_2)[source]

Creates a random variable with Matrix Normal Distribution.

The density of the said distribution can be found at [1].

Parameters

location_matrix: Real ``n x p`` matrix

Represents the location (mean) of the distribution

scale_matrix_1: Positive definite matrix

Scale Matrix of shape n x n

scale_matrix_2: Positive definite matrix

Scale Matrix of shape p x p

Returns

RandomSymbol

Examples

>>> from sympy import MatrixSymbol
>>> from sympy.stats import density, MatrixNormal
>>> M = MatrixNormal('M', [[1, 2]], [1], [[1, 0], [0, 1]])
>>> X = MatrixSymbol('X', 1, 2)
>>> density(M)(X).doit()
2*exp(-Trace((Matrix([
[-1],
[-2]]) + X.T)*(Matrix([[-1, -2]]) + X))/2)/pi
>>> density(M)([[3, 4]]).doit()
2*exp(-4)/pi

References

R840

https://en.wikipedia.org/wiki/Matrix_normal_distribution

Compound Distribution

class sympy.stats.compound_rv.CompoundDistribution(dist)[source]

Class for Compound Distributions.

Parameters

dist : Distribution

Distribution must contain a random parameter

Examples

>>> from sympy.stats.compound_rv import CompoundDistribution
>>> from sympy.stats.crv_types import NormalDistribution
>>> from sympy.stats import Normal
>>> from sympy.abc import x
>>> X = Normal('X', 2, 4)
>>> N = NormalDistribution(X, 4)
>>> C = CompoundDistribution(N)
>>> C.set
Interval(-oo, oo)
>>> C.pdf(x, evaluate=True).simplify()
exp(-x**2/64 + x/16 - 1/16)/(8*sqrt(pi))

References

R841

https://en.wikipedia.org/wiki/Compound_probability_distribution

Interface

sympy.stats.P(condition, given_condition=None, numsamples=None, evaluate=True, **kwargs)[source]

Probability that a condition is true, optionally given a second condition

Parameters

condition : Combination of Relationals containing RandomSymbols

The condition of which you want to compute the probability

given_condition : Combination of Relationals containing RandomSymbols

A conditional expression. P(X > 1, X > 0) is the probability that X > 1 given X > 0

numsamples : int

Enables sampling and approximates the probability with this many samples

evaluate : Bool (defaults to True)

In the case of continuous systems, setting this to False returns the unevaluated integral

Examples

>>> from sympy.stats import P, Die
>>> from sympy import Eq
>>> X, Y = Die('X', 6), Die('Y', 6)
>>> P(X > 3)
1/2
>>> P(Eq(X, 5), X > 2) # Probability that X == 5 given that X > 2
1/4
>>> P(X > Y)
5/12
class sympy.stats.Probability(prob, condition=None, **kwargs)[source]

Symbolic expression for the probability.

Examples

>>> from sympy.stats import Probability, Normal
>>> from sympy import Integral
>>> X = Normal("X", 0, 1)
>>> prob = Probability(X > 1)
>>> prob
Probability(X > 1)

Integral representation:

>>> prob.rewrite(Integral)
Integral(sqrt(2)*exp(-_z**2/2)/(2*sqrt(pi)), (_z, 1, oo))

Evaluation of the integral:

>>> prob.evaluate_integral()
sqrt(2)*(-sqrt(2)*sqrt(pi)*erf(sqrt(2)/2) + sqrt(2)*sqrt(pi))/(4*sqrt(pi))
sympy.stats.E(expr, condition=None, numsamples=None, evaluate=True, **kwargs)[source]

Returns the expected value of a random expression

Parameters

expr : Expr containing RandomSymbols

The expression of which you want to compute the expectation value

given : Expr containing RandomSymbols

A conditional expression. E(X, X>0) is expectation of X given X > 0

numsamples : int

Enables sampling and approximates the expectation with this many samples

evalf : Bool (defaults to True)

If sampling, return a number rather than a complicated expression

evaluate : Bool (defaults to True)

In the case of continuous systems, setting this to False returns the unevaluated integral

Examples

>>> from sympy.stats import E, Die
>>> X = Die('X', 6)
>>> E(X)
7/2
>>> E(2*X + 1)
8
>>> E(X, X > 3) # Expectation of X given that it is above 3
5
class sympy.stats.Expectation(expr, condition=None, **kwargs)[source]

Symbolic expression for the expectation.

Examples

>>> from sympy.stats import Expectation, Normal, Probability, Poisson
>>> from sympy import symbols, Integral, Sum
>>> mu = symbols("mu")
>>> sigma = symbols("sigma", positive=True)
>>> X = Normal("X", mu, sigma)
>>> Expectation(X)
Expectation(X)
>>> Expectation(X).evaluate_integral().simplify()
mu

To get the integral expression of the expectation:

>>> Expectation(X).rewrite(Integral)
Integral(sqrt(2)*X*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))

The same integral expression, in more abstract terms:

>>> Expectation(X).rewrite(Probability)
Integral(x*Probability(Eq(X, x)), (x, -oo, oo))

To get the Summation expression of the expectation for discrete random variables:

>>> lamda = symbols('lamda', positive=True)
>>> Z = Poisson('Z', lamda)
>>> Expectation(Z).rewrite(Sum)
Sum(Z*lamda**Z*exp(-lamda)/factorial(Z), (Z, 0, oo))

This class is aware of some properties of the expectation:

>>> from sympy.abc import a
>>> Expectation(a*X)
Expectation(a*X)
>>> Y = Normal("Y", 1, 2)
>>> Expectation(X + Y)
Expectation(X + Y)

To expand the Expectation into its expression, use expand():

>>> Expectation(X + Y).expand()
Expectation(X) + Expectation(Y)
>>> Expectation(a*X + Y).expand()
a*Expectation(X) + Expectation(Y)
>>> Expectation(a*X + Y)
Expectation(a*X + Y)
>>> Expectation((X + Y)*(X - Y)).expand()
Expectation(X**2) - Expectation(Y**2)

To evaluate the Expectation, use doit():

>>> Expectation(X + Y).doit()
mu + 1
>>> Expectation(X + Expectation(Y + Expectation(2*X))).doit()
3*mu + 1

To prevent evaluating nested Expectation, use doit(deep=False)

>>> Expectation(X + Expectation(Y)).doit(deep=False)
mu + Expectation(Expectation(Y))
>>> Expectation(X + Expectation(Y + Expectation(2*X))).doit(deep=False)
mu + Expectation(Expectation(Y + Expectation(2*X)))
sympy.stats.density(expr, condition=None, evaluate=True, numsamples=None, **kwargs)[source]

Probability density of a random expression, optionally given a second condition.

This density will take on different forms for different types of probability spaces. Discrete variables produce Dicts. Continuous variables produce Lambdas.

Parameters

expr : Expr containing RandomSymbols

The expression of which you want to compute the density value

condition : Relational containing RandomSymbols

A conditional expression. density(X > 1, X > 0) is density of X > 1 given X > 0

numsamples : int

Enables sampling and approximates the density with this many samples

Examples

>>> from sympy.stats import density, Die, Normal
>>> from sympy import Symbol
>>> x = Symbol('x')
>>> D = Die('D', 6)
>>> X = Normal(x, 0, 1)
>>> density(D).dict
{1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}
>>> density(2*D).dict
{2: 1/6, 4: 1/6, 6: 1/6, 8: 1/6, 10: 1/6, 12: 1/6}
>>> density(X)(x)
sqrt(2)*exp(-x**2/2)/(2*sqrt(pi))
sympy.stats.entropy(expr, condition=None, **kwargs)[source]

Calculates the entropy of a probability distribution

Parameters

expression : the random expression whose entropy is to be calculated

condition : optional, to specify conditions on random expression

b: base of the logarithm, optional

By default, it is taken as Euler’s number

Returns

result : Entropy of the expression, a constant

Examples

>>> from sympy.stats import Normal, Die, entropy
>>> X = Normal('X', 0, 1)
>>> entropy(X)
log(2)/2 + 1/2 + log(pi)/2
>>> D = Die('D', 4)
>>> entropy(D)
log(4)

References

R842

https://en.wikipedia.org/wiki/Entropy_(information_theory)

R843

https://www.crmarsh.com/static/pdf/Charles_Marsh_Continuous_Entropy.pdf

R844

http://www.math.uconn.edu/~kconrad/blurbs/analysis/entropypost.pdf

sympy.stats.given(expr, condition=None, **kwargs)[source]

Conditional Random Expression. From a random expression and a condition on that expression, creates a new probability space from the condition and returns the same expression on that conditional probability space.

Examples

>>> from sympy.stats import given, density, Die
>>> X = Die('X', 6)
>>> Y = given(X, X > 3)
>>> density(Y).dict
{4: 1/3, 5: 1/3, 6: 1/3}

Following convention, if the condition is a random symbol then that symbol is considered fixed.

>>> from sympy.stats import Normal
>>> from sympy import pprint
>>> from sympy.abc import z
>>> X = Normal('X', 0, 1)
>>> Y = Normal('Y', 0, 1)
>>> pprint(density(X + Y, Y)(z), use_unicode=False)
                2
       -(-Y + z)
       -----------
  ___       2
\/ 2 *e
------------------
         ____
     2*\/ pi
sympy.stats.where(condition, given_condition=None, **kwargs)[source]

Returns the domain where a condition is True.

Examples

>>> from sympy.stats import where, Die, Normal
>>> from sympy import And
>>> D1, D2 = Die('a', 6), Die('b', 6)
>>> a, b = D1.symbol, D2.symbol
>>> X = Normal('x', 0, 1)
>>> where(X**2<1)
Domain: (-1 < x) & (x < 1)
>>> where(X**2<1).set
Interval.open(-1, 1)
>>> where(And(D1<=D2 , D2<3))
Domain: (Eq(a, 1) & Eq(b, 1)) | (Eq(a, 1) & Eq(b, 2)) | (Eq(a, 2) & Eq(b, 2))
sympy.stats.variance(X, condition=None, **kwargs)[source]

Variance of a random expression

\[variance(X) = E((X-E(X))^{2})\]

Examples

>>> from sympy.stats import Die, Bernoulli, variance
>>> from sympy import simplify, Symbol
>>> X = Die('X', 6)
>>> p = Symbol('p')
>>> B = Bernoulli('B', p, 1, 0)
>>> variance(2*X)
35/3
>>> simplify(variance(B))
p*(1 - p)
class sympy.stats.Variance(arg, condition=None, **kwargs)[source]

Symbolic expression for the variance.

Examples

>>> from sympy import symbols, Integral
>>> from sympy.stats import Normal, Expectation, Variance, Probability
>>> mu = symbols("mu", positive=True)
>>> sigma = symbols("sigma", positive=True)
>>> X = Normal("X", mu, sigma)
>>> Variance(X)
Variance(X)
>>> Variance(X).evaluate_integral()
sigma**2

Integral representation of the underlying calculations:

>>> Variance(X).rewrite(Integral)
Integral(sqrt(2)*(X - Integral(sqrt(2)*X*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo)))**2*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))

Integral representation, without expanding the PDF:

>>> Variance(X).rewrite(Probability)
-Integral(x*Probability(Eq(X, x)), (x, -oo, oo))**2 + Integral(x**2*Probability(Eq(X, x)), (x, -oo, oo))

Rewrite the variance in terms of the expectation

>>> Variance(X).rewrite(Expectation)
-Expectation(X)**2 + Expectation(X**2)

Some transformations based on the properties of the variance may happen:

>>> from sympy.abc import a
>>> Y = Normal("Y", 0, 1)
>>> Variance(a*X)
Variance(a*X)

To expand the variance in its expression, use expand():

>>> Variance(a*X).expand()
a**2*Variance(X)
>>> Variance(X + Y)
Variance(X + Y)
>>> Variance(X + Y).expand()
2*Covariance(X, Y) + Variance(X) + Variance(Y)
sympy.stats.covariance(X, Y, condition=None, **kwargs)[source]

Covariance of two random expressions

A measure of how much the two variables rise and fall together

\[covariance(X,Y) = E((X-E(X)) (Y-E(Y)))\]

Examples

>>> from sympy.stats import Exponential, covariance
>>> from sympy import Symbol
>>> rate = Symbol('lambda', positive=True, real=True, finite=True)
>>> X = Exponential('X', rate)
>>> Y = Exponential('Y', rate)
>>> covariance(X, X)
lambda**(-2)
>>> covariance(X, Y)
0
>>> covariance(X, Y + rate*X)
1/lambda
class sympy.stats.Covariance(arg1, arg2, condition=None, **kwargs)[source]

Symbolic expression for the covariance.

Examples

>>> from sympy.stats import Covariance
>>> from sympy.stats import Normal
>>> X = Normal("X", 3, 2)
>>> Y = Normal("Y", 0, 1)
>>> Z = Normal("Z", 0, 1)
>>> W = Normal("W", 0, 1)
>>> cexpr = Covariance(X, Y)
>>> cexpr
Covariance(X, Y)

Evaluate the covariance, \(X\) and \(Y\) are independent, therefore zero is the result:

>>> cexpr.evaluate_integral()
0

Rewrite the covariance expression in terms of expectations:

>>> from sympy.stats import Expectation
>>> cexpr.rewrite(Expectation)
Expectation(X*Y) - Expectation(X)*Expectation(Y)

In order to expand the argument, use expand():

>>> from sympy.abc import a, b, c, d
>>> Covariance(a*X + b*Y, c*Z + d*W)
Covariance(a*X + b*Y, c*Z + d*W)
>>> Covariance(a*X + b*Y, c*Z + d*W).expand()
a*c*Covariance(X, Z) + a*d*Covariance(W, X) + b*c*Covariance(Y, Z) + b*d*Covariance(W, Y)

This class is aware of some properties of the covariance:

>>> Covariance(X, X).expand()
Variance(X)
>>> Covariance(a*X, b*Y).expand()
a*b*Covariance(X, Y)
sympy.stats.coskewness(X, Y, Z, condition=None, **kwargs)[source]

Calculates the coskewness of three random variables. Mathematically, coskewness is defined as

\[coskewness(X,Y,Z)=\frac{E[(X-E[X])(Y-E[Y])(Z-E[Z])]}{\sigma_{X}\sigma_{Y}\sigma_{Z}}\]
Parameters

X : RandomSymbol

Random Variable used to calculate coskewness

Y : RandomSymbol

Random Variable used to calculate coskewness

Z : RandomSymbol

Random Variable used to calculate coskewness

condition : Expr containing RandomSymbols

A conditional expression

Returns

coskewness : The coskewness of the three random variables

Examples

>>> from sympy.stats import coskewness, Exponential, skewness
>>> from sympy import symbols
>>> p = symbols('p', positive=True)
>>> X = Exponential('X', p)
>>> Y = Exponential('Y', 2*p)
>>> coskewness(X, Y, Y)
0
>>> coskewness(X, Y + X, Y + 2*X)
16*sqrt(85)/85
>>> coskewness(X + 2*Y, Y + X, Y + 2*X, X > 3)
9*sqrt(170)/85
>>> coskewness(Y, Y, Y) == skewness(Y)
True
>>> coskewness(X, Y + p*X, Y + 2*p*X)
4/(sqrt(1 + 1/(4*p**2))*sqrt(4 + 1/(4*p**2)))

References

R845

https://en.wikipedia.org/wiki/Coskewness

sympy.stats.median(X, evaluate=True, **kwargs)[source]

Calculates the median of the probability distribution. Mathematically, the median of a probability distribution is defined as the set of all values \(m\) for which the following condition is satisfied

\[P(X \leq m) \geq \frac{1}{2} \quad \text{and} \quad P(X \geq m) \geq \frac{1}{2}\]
Parameters

X: The random expression whose median is to be calculated.

Returns

The FiniteSet or Interval which contains the median of the random expression.

Examples

>>> from sympy.stats import Normal, Die, median
>>> N = Normal('N', 3, 1)
>>> median(N)
FiniteSet(3)
>>> D = Die('D')
>>> median(D)
FiniteSet(3, 4)

References

R846

https://en.wikipedia.org/wiki/Median#Probability_distributions

sympy.stats.std(X, condition=None, **kwargs)[source]

Standard Deviation of a random expression

\[std(X) = \sqrt{E((X-E(X))^{2})}\]

Examples

>>> from sympy.stats import Bernoulli, std
>>> from sympy import Symbol, simplify
>>> p = Symbol('p')
>>> B = Bernoulli('B', p, 1, 0)
>>> simplify(std(B))
sqrt(p*(1 - p))
sympy.stats.sample(expr, condition=None, size=(), library='scipy', numsamples=1, seed=None, **kwargs)[source]

A realization of the random expression

Parameters

expr : Expression of random variables

Expression from which sample is extracted

condition : Expr containing RandomSymbols

A conditional expression

size : int, tuple

Represents size of each sample in numsamples

library : str

  • ‘scipy’ : Sample using scipy

  • ‘numpy’ : Sample using numpy

  • ‘pymc3’ : Sample using PyMC3

Choose any of the available options as a string; by default it is ‘scipy’

numsamples : int

Number of samples, each with size as size

seed :

An object to be used as seed by the given external library for sampling \(expr\). Following is the list of possible types of object for the supported libraries,

  • ‘scipy’: int, numpy.random.RandomState, numpy.random.Generator

  • ‘numpy’: int, numpy.random.RandomState, numpy.random.Generator

  • ‘pymc3’: int

Optional, by default None, in which case seed settings related to the given library will be used. No modifications to environment’s global seed settings are done by this argument.

Returns

sample: iterator object

iterator object containing the sample/samples of given expr

Examples

>>> from sympy.stats import Die, sample, Normal, Geometric
>>> X, Y, Z = Die('X', 6), Die('Y', 6), Die('Z', 6) # Finite Random Variable
>>> die_roll = sample(X + Y + Z) 
>>> next(die_roll) 
6
>>> N = Normal('N', 3, 4) # Continuous Random Variable
>>> samp = next(sample(N)) 
>>> samp in N.pspace.domain.set 
True
>>> samp = next(sample(N, N>0)) 
>>> samp > 0 
True
>>> samp_list = next(sample(N, size=4)) 
>>> [sam in N.pspace.domain.set for sam in samp_list] 
[True, True, True, True]
>>> G = Geometric('G', 0.5) # Discrete Random Variable
>>> samp_list = next(sample(G, size=3)) 
>>> samp_list 
array([10,  4,  1])
>>> [sam in G.pspace.domain.set for sam in samp_list] 
[True, True, True]
>>> MN = Normal("MN", [3, 4], [[2, 1], [1, 2]]) # Joint Random Variable
>>> samp_list = next(sample(MN, size=4)) 
>>> samp_list 
array([[4.22564264, 3.23364418],
       [3.41002011, 4.60090908],
       [3.76151866, 4.77617143],
       [4.71440865, 2.65714157]])
>>> [tuple(sam) in MN.pspace.domain.set for sam in samp_list] 
[True, True, True, True]
sympy.stats.sample_iter(expr, condition=None, size=(), library='scipy', numsamples=oo, seed=None, **kwargs)[source]

Returns an iterator of realizations from the expression given a condition

Parameters

expr: Expr

Random expression to be realized

condition: Expr, optional

A conditional expression

size : int, tuple

Represents size of each sample in numsamples

numsamples: integer, optional

Length of the iterator (defaults to infinity)

seed :

An object to be used as seed by the given external library for sampling \(expr\). Following is the list of possible types of object for the supported libraries,

  • ‘scipy’: int, numpy.random.RandomState, numpy.random.Generator

  • ‘numpy’: int, numpy.random.RandomState, numpy.random.Generator

  • ‘pymc3’: int

Optional, by default None, in which case seed settings related to the given library will be used. No modifications to environment’s global seed settings are done by this argument.

Returns

sample_iter: iterator object

iterator object containing the sample/samples of given expr

Examples

>>> from sympy.stats import Normal, sample_iter
>>> X = Normal('X', 0, 1)
>>> expr = X*X + 3
>>> iterator = sample_iter(expr, numsamples=3) 
>>> list(iterator) 
[12, 4, 7]
sympy.stats.factorial_moment(X, n, condition=None, **kwargs)[source]

The factorial moment is a mathematical quantity defined as the expectation or average of the falling factorial of a random variable.

\[factorial\text{-}moment(X, n) = E(X(X - 1)(X - 2) \cdots (X - n + 1))\]
Parameters

n: A natural number, n-th factorial moment.

condition : Expr containing RandomSymbols

A conditional expression.

Examples

>>> from sympy.stats import factorial_moment, Poisson, Binomial
>>> from sympy import Symbol, S
>>> lamda = Symbol('lamda')
>>> X = Poisson('X', lamda)
>>> factorial_moment(X, 2)
lamda**2
>>> Y = Binomial('Y', 2, S.Half)
>>> factorial_moment(Y, 2)
1/2
>>> factorial_moment(Y, 2, Y > 1) # find factorial moment for Y > 1
2

References

R847

https://en.wikipedia.org/wiki/Factorial_moment

R848

http://mathworld.wolfram.com/FactorialMoment.html

sympy.stats.kurtosis(X, condition=None, **kwargs)[source]

Characterizes the tails/outliers of a probability distribution.

Kurtosis of any univariate normal distribution is 3. Kurtosis less than 3 means that the distribution produces fewer and less extreme outliers than the normal distribution.

\[kurtosis(X) = E(((X - E(X))/\sigma_X)^{4})\]
Parameters

condition : Expr containing RandomSymbols

A conditional expression. kurtosis(X, X>0) is kurtosis of X given X > 0

Examples

>>> from sympy.stats import kurtosis, Exponential, Normal
>>> from sympy import Symbol
>>> X = Normal('X', 0, 1)
>>> kurtosis(X)
3
>>> kurtosis(X, X > 0) # find kurtosis given X > 0
(-4/pi - 12/pi**2 + 3)/(1 - 2/pi)**2
>>> rate = Symbol('lamda', positive=True, real=True, finite=True)
>>> Y = Exponential('Y', rate)
>>> kurtosis(Y)
9

References

R849

https://en.wikipedia.org/wiki/Kurtosis

R850

http://mathworld.wolfram.com/Kurtosis.html

sympy.stats.skewness(X, condition=None, **kwargs)[source]

Measure of the asymmetry of the probability distribution.

Positive skew indicates that the distribution has a longer or heavier tail to the right of the mean.

\[skewness(X) = E(((X - E(X))/\sigma_X)^{3})\]
Parameters

condition : Expr containing RandomSymbols

A conditional expression. skewness(X, X>0) is skewness of X given X > 0

Examples

>>> from sympy.stats import skewness, Exponential, Normal
>>> from sympy import Symbol
>>> X = Normal('X', 0, 1)
>>> skewness(X)
0
>>> skewness(X, X > 0) # find skewness given X > 0
(-sqrt(2)/sqrt(pi) + 4*sqrt(2)/pi**(3/2))/(1 - 2/pi)**(3/2)
>>> rate = Symbol('lambda', positive=True, real=True, finite=True)
>>> Y = Exponential('Y', rate)
>>> skewness(Y)
2
sympy.stats.correlation(X, Y, condition=None, **kwargs)[source]

Correlation of two random expressions, also known as correlation coefficient or Pearson’s correlation

A normalized measure of how much the two variables rise and fall together

\[correlation(X,Y) = \frac{E\left((X-E(X))(Y-E(Y))\right)}{\sigma_x \sigma_y}\]

Examples

>>> from sympy.stats import Exponential, correlation
>>> from sympy import Symbol
>>> rate = Symbol('lambda', positive=True, real=True, finite=True)
>>> X = Exponential('X', rate)
>>> Y = Exponential('Y', rate)
>>> correlation(X, X)
1
>>> correlation(X, Y)
0
>>> correlation(X, Y + rate*X)
1/sqrt(1 + lambda**(-2))
sympy.stats.rv.sampling_density(expr, given_condition=None, library='scipy', numsamples=1, seed=None, **kwargs)[source]

Sampling version of density

sympy.stats.rv.sampling_P(condition, given_condition=None, library='scipy', numsamples=1, evalf=True, seed=None, **kwargs)[source]

Sampling version of P

sympy.stats.rv.sampling_E(expr, given_condition=None, library='scipy', numsamples=1, evalf=True, seed=None, **kwargs)[source]

Sampling version of E
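
These are thin sampling-based counterparts of density, P and E. A small sketch, assuming scipy is installed (outputs are omitted because the results are random; the numsamples and seed keywords are the ones listed in the signatures above):

>>> from sympy.stats import Die
>>> from sympy.stats.rv import sampling_E, sampling_P
>>> X = Die('X', 6)
>>> approx_mean = sampling_E(X, numsamples=1000, seed=42) # should be close to E(X) == 7/2
>>> approx_prob = sampling_P(X > 3, numsamples=1000, seed=42) # should be close to 1/2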

class sympy.stats.Moment(X, n, c=0, condition=None, **kwargs)[source]

Symbolic class for Moment

Examples

>>> from sympy import Symbol, Integral
>>> from sympy.stats import Normal, Expectation, Probability, Moment
>>> mu = Symbol('mu', real=True)
>>> sigma = Symbol('sigma', real=True, positive=True)
>>> X = Normal('X', mu, sigma)
>>> M = Moment(X, 3, 1)

To evaluate the result of Moment use \(doit\):

>>> M.doit()
mu**3 - 3*mu**2 + 3*mu*sigma**2 + 3*mu - 3*sigma**2 - 1

Rewrite the Moment expression in terms of Expectation:

>>> M.rewrite(Expectation)
Expectation((X - 1)**3)

Rewrite the Moment expression in terms of Probability:

>>> M.rewrite(Probability)
Integral((x - 1)**3*Probability(Eq(X, x)), (x, -oo, oo))

Rewrite the Moment expression in terms of Integral:

>>> M.rewrite(Integral)
Integral(sqrt(2)*(X - 1)**3*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))
sympy.stats.moment(X, n, c=0, condition=None, *, evaluate=True, **kwargs)[source]

Return the nth moment of a random expression about c.

\[moment(X, n, c) = E((X-c)^{n})\]

Default value of c is 0.

Examples

>>> from sympy.stats import Die, moment, E
>>> X = Die('X', 6)
>>> moment(X, 1, 6)
-5/2
>>> moment(X, 2)
91/6
>>> moment(X, 1) == E(X)
True
class sympy.stats.CentralMoment(X, n, condition=None, **kwargs)[source]

Symbolic class Central Moment

Examples

>>> from sympy import Symbol, Integral
>>> from sympy.stats import Normal, Expectation, Probability, CentralMoment
>>> mu = Symbol('mu', real=True)
>>> sigma = Symbol('sigma', real=True, positive=True)
>>> X = Normal('X', mu, sigma)
>>> CM = CentralMoment(X, 4)

To evaluate the result of CentralMoment use \(doit\):

>>> CM.doit().simplify()
3*sigma**4

Rewrite the CentralMoment expression in terms of Expectation:

>>> CM.rewrite(Expectation)
Expectation((X - Expectation(X))**4)

Rewrite the CentralMoment expression in terms of Probability:

>>> CM.rewrite(Probability)
Integral((x - Integral(x*Probability(True), (x, -oo, oo)))**4*Probability(Eq(X, x)), (x, -oo, oo))

Rewrite the CentralMoment expression in terms of Integral:

>>> CM.rewrite(Integral)
Integral(sqrt(2)*(X - Integral(sqrt(2)*X*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo)))**4*exp(-(X - mu)**2/(2*sigma**2))/(2*sqrt(pi)*sigma), (X, -oo, oo))
sympy.stats.cmoment(X, n, condition=None, *, evaluate=True, **kwargs)[source]

Return the nth central moment of a random expression about its mean.

\[cmoment(X, n) = E((X - E(X))^{n})\]

Examples

>>> from sympy.stats import Die, cmoment, variance
>>> X = Die('X', 6)
>>> cmoment(X, 3)
0
>>> cmoment(X, 2)
35/12
>>> cmoment(X, 2) == variance(X)
True
class sympy.stats.ExpectationMatrix(expr, condition=None)[source]

Expectation of a random matrix expression.

Examples

>>> from sympy.stats import ExpectationMatrix, Normal
>>> from sympy.stats.rv import RandomMatrixSymbol
>>> from sympy import symbols, MatrixSymbol, Matrix
>>> k = symbols("k")
>>> A, B = MatrixSymbol("A", k, k), MatrixSymbol("B", k, k)
>>> X, Y = RandomMatrixSymbol("X", k, 1), RandomMatrixSymbol("Y", k, 1)
>>> ExpectationMatrix(X)
ExpectationMatrix(X)
>>> ExpectationMatrix(A*X).shape
(k, 1)

To expand the expectation in its expression, use expand():

>>> ExpectationMatrix(A*X + B*Y).expand()
A*ExpectationMatrix(X) + B*ExpectationMatrix(Y)
>>> ExpectationMatrix((X + Y)*(X - Y).T).expand()
ExpectationMatrix(X*X.T) - ExpectationMatrix(X*Y.T) + ExpectationMatrix(Y*X.T) - ExpectationMatrix(Y*Y.T)

To evaluate the ExpectationMatrix, use doit():

>>> N11, N12 = Normal('N11', 11, 1), Normal('N12', 12, 1)
>>> N21, N22 = Normal('N21', 21, 1), Normal('N22', 22, 1)
>>> M11, M12 = Normal('M11', 1, 1), Normal('M12', 2, 1)
>>> M21, M22 = Normal('M21', 3, 1), Normal('M22', 4, 1)
>>> x1 = Matrix([[N11, N12], [N21, N22]])
>>> x2 = Matrix([[M11, M12], [M21, M22]])
>>> ExpectationMatrix(x1 + x2).doit()
Matrix([
[12, 14],
[24, 26]])
class sympy.stats.VarianceMatrix(arg, condition=None)[source]

Variance of a random matrix probability expression. Also known as the covariance matrix, auto-covariance matrix, dispersion matrix, or variance-covariance matrix.

Examples

>>> from sympy.stats import VarianceMatrix
>>> from sympy.stats.rv import RandomMatrixSymbol
>>> from sympy import symbols, MatrixSymbol
>>> k = symbols("k")
>>> A, B = MatrixSymbol("A", k, k), MatrixSymbol("B", k, k)
>>> X, Y = RandomMatrixSymbol("X", k, 1), RandomMatrixSymbol("Y", k, 1)
>>> VarianceMatrix(X)
VarianceMatrix(X)
>>> VarianceMatrix(X).shape
(k, k)

To expand the variance in its expression, use expand():

>>> VarianceMatrix(A*X).expand()
A*VarianceMatrix(X)*A.T
>>> VarianceMatrix(A*X + B*Y).expand()
2*A*CrossCovarianceMatrix(X, Y)*B.T + A*VarianceMatrix(X)*A.T + B*VarianceMatrix(Y)*B.T
class sympy.stats.CrossCovarianceMatrix(arg1, arg2, condition=None)[source]

Covariance of a random matrix probability expression.

Examples

>>> from sympy.stats import CrossCovarianceMatrix
>>> from sympy.stats.rv import RandomMatrixSymbol
>>> from sympy import symbols, MatrixSymbol
>>> k = symbols("k")
>>> A, B = MatrixSymbol("A", k, k), MatrixSymbol("B", k, k)
>>> C, D = MatrixSymbol("C", k, k), MatrixSymbol("D", k, k)
>>> X, Y = RandomMatrixSymbol("X", k, 1), RandomMatrixSymbol("Y", k, 1)
>>> Z, W = RandomMatrixSymbol("Z", k, 1), RandomMatrixSymbol("W", k, 1)
>>> CrossCovarianceMatrix(X, Y)
CrossCovarianceMatrix(X, Y)
>>> CrossCovarianceMatrix(X, Y).shape
(k, k)

To expand the covariance in its expression, use expand():

>>> CrossCovarianceMatrix(X + Y, Z).expand()
CrossCovarianceMatrix(X, Z) + CrossCovarianceMatrix(Y, Z)
>>> CrossCovarianceMatrix(A*X , Y).expand()
A*CrossCovarianceMatrix(X, Y)
>>> CrossCovarianceMatrix(A*X, B.T*Y).expand()
A*CrossCovarianceMatrix(X, Y)*B
>>> CrossCovarianceMatrix(A*X + B*Y, C.T*Z + D.T*W).expand()
A*CrossCovarianceMatrix(X, W)*D + A*CrossCovarianceMatrix(X, Z)*C + B*CrossCovarianceMatrix(Y, W)*D + B*CrossCovarianceMatrix(Y, Z)*C

Mechanics

SymPy Stats employs a relatively complex class hierarchy.

RandomDomains map variables to their possible values. For example, we might say that the symbol Symbol('x') can take on the values \(\{1,2,3,4,5,6\}\).

class sympy.stats.rv.RandomDomain[source]

A PSpace, or Probability Space, combines a RandomDomain with a density to provide probabilistic information. For example, the above domain could be enhanced by a finite density {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6} to fully define the roll of a fair die named x.

class sympy.stats.rv.PSpace[source]

A RandomSymbol represents the PSpace’s symbol ‘x’ inside of SymPy expressions.

class sympy.stats.rv.RandomSymbol[source]
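
A minimal sketch of how these three pieces show up for the fair die discussed above. It uses the pspace helper documented later on this page and assumes the finite domain exposes its value set via its set attribute (an implementation detail, not part of the public interface):

>>> from sympy import FiniteSet
>>> from sympy.stats import Die, density
>>> from sympy.stats.rv import pspace, RandomSymbol
>>> x = Die('x', 6)
>>> isinstance(x, RandomSymbol)   # the symbol users place in expressions
True
>>> pspace(x).domain.set == FiniteSet(1, 2, 3, 4, 5, 6)   # its RandomDomain
True
>>> density(x).dict   # the density that the PSpace layers on top of the domain
{1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}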

The RandomDomain and PSpace classes are almost never directly instantiated. Instead they are subclassed for a variety of situations.

RandomDomains and PSpaces must be sufficiently general to represent domains and spaces of several variables with arbitrarily complex densities. This generality is often unnecessary, so instead we build SingleDomains and SinglePSpaces to represent single, univariate events and processes, such as a single die or a single normal variable.

class sympy.stats.rv.SinglePSpace[source]
class sympy.stats.rv.SingleDomain[source]

Another common case is to collect together a set of such univariate random variables. A collection of independent SinglePSpaces or SingleDomains can be brought together to form a ProductDomain or ProductPSpace. These objects would be useful in representing, for example, three dice rolled together, as sketched below.

class sympy.stats.rv.ProductDomain[source]
class sympy.stats.rv.ProductPSpace[source]
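
A short sketch of the product construction for three dice; the isinstance check assumes that the concrete finite product space is a subclass of ProductPSpace:

>>> from sympy.stats import Die
>>> from sympy.stats.rv import pspace, ProductPSpace
>>> X, Y, Z = Die('X', 6), Die('Y', 6), Die('Z', 6)
>>> joint = pspace(X + Y + Z)    # the three single spaces are combined
>>> isinstance(joint, ProductPSpace)
True
>>> len(joint.values)            # the product space still knows all three dice
3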

The Conditional adjective is added whenever we add a global condition to a RandomDomain or PSpace. A common example would be three independent dice where we know their sum to be greater than 12.

class sympy.stats.rv.ConditionalDomain[source]
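
Conditional spaces are normally reached through the query functions rather than built by hand. A small example using P with a given condition: the answer is 1 because a sum above 10 forces both dice to be at least 5.

>>> from sympy.stats import Die, P
>>> X, Y = Die('X', 6), Die('Y', 6)
>>> P(X > 3, X + Y > 10)   # probability that X > 3, given that the sum exceeds 10
1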

We specialize further into Finite and Continuous versions of these classes to represent finite (such as dice) and continuous (such as normals) random variables.

class sympy.stats.frv.FiniteDomain[source]
class sympy.stats.frv.FinitePSpace[source]
class sympy.stats.crv.ContinuousDomain[source]
class sympy.stats.crv.ContinuousPSpace[source]

Additionally, there are a few specialized classes that implement certain common random variable types. There is, for example, a DiePSpace that specializes SingleFinitePSpace and a NormalPSpace that specializes SingleContinuousPSpace.

class sympy.stats.frv_types.DiePSpace
class sympy.stats.crv_types.NormalPSpace

RandomVariables can be extracted from these objects using the PSpace.values attribute.

As previously mentioned, SymPy Stats employs a relatively complex class structure, and inheritance is used widely in the implementation of the end-level classes. This design balances the need to let SymPy represent arbitrarily defined random variables against keeping the common cases efficient. It complicates the code, but the complexity only matters to those extending SymPy Stats to other random variable types.

Users will rarely use this class structure directly. Instead these mechanics are exposed through variable creation functions Die, Coin, FiniteRV, Normal, Exponential, etc. These build the appropriate SinglePSpaces and return the corresponding RandomSymbol. Conditional and Product spaces are formed in the natural construction of SymPy expressions and through the interface functions E, given, density, etc.

sympy.stats.Die()
sympy.stats.Normal()

There are some additional functions that may be useful. They are largely used internally.

sympy.stats.rv.random_symbols(expr)[source]

Returns all RandomSymbols within a SymPy Expression.
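
A small illustrative call (a set comparison is used so the result does not depend on the ordering of the returned list):

>>> from sympy.stats import Normal
>>> from sympy.stats.rv import random_symbols
>>> X, Y = Normal('X', 0, 1), Normal('Y', 0, 1)
>>> set(random_symbols(2*X + Y + 1)) == {X, Y}
True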

sympy.stats.rv.pspace(expr)[source]

Returns the underlying Probability Space of a random expression.

For internal use.

Examples

>>> from sympy.stats import pspace, Normal
>>> X = Normal('X', 0, 1)
>>> pspace(2*X + 1) == X.pspace
True
sympy.stats.rv.rs_swap(a, b)[source]

Build a dictionary to swap RandomSymbols based on their underlying symbol.

For example, if X = ('x', pspace1) and Y = ('x', pspace2), then X and Y match and the key-value pair {X: Y} will appear in the result.

Inputs: collections a and b of random variables which share common symbols
Output: dict mapping RVs in a to RVs in b
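
A minimal sketch of the matching behaviour, deliberately declaring two different distributions over the same underlying symbol name 'x':

>>> from sympy.stats import Normal, Exponential
>>> from sympy.stats.rv import rs_swap
>>> X = Normal('x', 0, 1)        # RandomSymbol over symbol 'x', normal pspace
>>> Y = Exponential('x', 1)      # different pspace, same underlying symbol 'x'
>>> swap = rs_swap((X,), (Y,))   # maps each RV in the first collection to its match
>>> swap[X] == Y
True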