Template:Infobox probability distribution/testcases


Normal distribution

Side by side comparison
{{Infobox probability distribution}}{{Infobox probability distribution/sandbox}}
Normal Distribution
Probability density function
Probability density function for the normal distribution
The red curve is the standard normal distribution
Cumulative distribution function
Cumulative distribution function for the normal distribution
Notation <math>\mathcal{N}(\mu,\sigma^2)</math>
Parameters <math>\mu\in\R</math> = mean (location)
<math>\sigma^2>0</math> = variance (squared scale)
Support <math>x\in\R</math>
PDF <math>\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x - \mu)^2}{2\sigma^2}}</math>
CDF <math>\frac{1}{2}\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]</math>
Quantile <math>\mu+\sigma\sqrt{2}\operatorname{erf}^{-1}(2p-1)</math>
Mean <math>\mu</math>
Median <math>\mu</math>
Mode <math>\mu</math>
Variance <math>\sigma^2</math>
MAD <math>\sqrt{2/\pi}\,\sigma</math>
Skewness <math>0</math>
Excess kurtosis <math>0</math>
Entropy <math>\frac{1}{2}\log(2\pi e\sigma^2)</math>
MGF <math>\exp(\mu t + \sigma^2 t^2/2)</math>
CF <math>\exp(i\mu t - \sigma^2 t^2/2)</math>
Fisher information <math>\mathcal{I}(\mu,\sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}</math>
<math>\mathcal{I}(\mu,\sigma^2) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix}</math>
Kullback–Leibler divergence <math>D_\text{KL}(\mathcal{N}_0 \| \mathcal{N}_1) = \frac{1}{2}\left[ \left(\frac{\sigma_0}{\sigma_1}\right)^2 + \frac{(\mu_1 - \mu_0)^2}{\sigma_1^2} - 1 + 2\ln\frac{\sigma_1}{\sigma_0} \right]</math>
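As an illustrative cross-check of the closed forms above (a minimal sketch, not part of either template; it assumes NumPy and SciPy are available and uses scipy.stats.norm as the reference implementation):

<syntaxhighlight lang="python">
# Sanity check of the normal-distribution formulas above against scipy.stats.norm.
import numpy as np
from scipy.special import erf, erfinv
from scipy.stats import norm

mu, sigma, x, p = 1.5, 2.0, 0.7, 0.9

# PDF and CDF from the infobox closed forms
pdf = np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
cdf = 0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2))))
assert np.isclose(pdf, norm.pdf(x, loc=mu, scale=sigma))
assert np.isclose(cdf, norm.cdf(x, loc=mu, scale=sigma))

# Quantile function mu + sigma*sqrt(2)*erfinv(2p - 1)
assert np.isclose(mu + sigma * np.sqrt(2) * erfinv(2 * p - 1),
                  norm.ppf(p, loc=mu, scale=sigma))

# Differential entropy 0.5*log(2*pi*e*sigma^2)
assert np.isclose(0.5 * np.log(2 * np.pi * np.e * sigma**2),
                  norm.entropy(loc=mu, scale=sigma))
</syntaxhighlight>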

Binomial distribution

Side by side comparison
{{Infobox probability distribution}}{{Infobox probability distribution/sandbox}}
Binomial distribution
Probability mass function
Probability mass function for the binomial distribution
Cumulative distribution function
Cumulative distribution function for the binomial distribution
Notation <math>B(n,p)</math>
Parameters <math>n \in \{0, 1, 2, \ldots\}</math> – number of trials
<math>p \in [0,1]</math> – success probability for each trial
Support <math>k \in \{0, 1, \ldots, n\}</math> – number of successes
PMF <math>\binom{n}{k} p^k (1-p)^{n-k}</math>
CDF <math>I_{1-p}(n - k, 1 + k)</math>
Mean <math>np</math>
Median <math>\lfloor np \rfloor</math> or <math>\lceil np \rceil</math>
Mode <math>\lfloor (n + 1)p \rfloor</math> or <math>\lceil (n + 1)p \rceil - 1</math>
Variance <math>np(1 - p)</math>
Skewness <math>\frac{1-2p}{\sqrt{np(1-p)}}</math>
Excess kurtosis <math>\frac{1-6p(1-p)}{np(1-p)}</math>
Entropy <math>\frac{1}{2} \log_2\left(2\pi e np(1-p)\right) + O\left(\frac{1}{n}\right)</math> in shannons; for nats, use the natural logarithm.
MGF <math>(1-p + pe^t)^n</math>
CF <math>(1-p + pe^{it})^n</math>
PGF <math>G(z) = [(1-p) + pz]^n</math>
Fisher information <math>g_n(p) = \frac{n}{p(1-p)}</math> (for fixed <math>n</math>)
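A short numerical cross-check of these formulas (a minimal sketch, not part of either template; it assumes NumPy and SciPy, with scipy.stats.binom as the reference):

<syntaxhighlight lang="python">
# Check the binomial PMF, beta-function CDF identity, and moments against scipy.stats.binom.
import numpy as np
from scipy.special import comb, betainc
from scipy.stats import binom

n, p, k = 12, 0.3, 4

# PMF C(n,k) p^k (1-p)^(n-k)
assert np.isclose(comb(n, k) * p**k * (1 - p)**(n - k), binom.pmf(k, n, p))

# CDF as the regularized incomplete beta function I_{1-p}(n-k, 1+k)
assert np.isclose(betainc(n - k, 1 + k, 1 - p), binom.cdf(k, n, p))

# Mean, variance, skewness, excess kurtosis
mean, var, skew, kurt = binom.stats(n, p, moments='mvsk')
assert np.isclose(mean, n * p)
assert np.isclose(var, n * p * (1 - p))
assert np.isclose(skew, (1 - 2 * p) / np.sqrt(n * p * (1 - p)))
assert np.isclose(kurt, (1 - 6 * p * (1 - p)) / (n * p * (1 - p)))
</syntaxhighlight>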

Geometric distribution

Side by side comparison
{{Infobox probability distribution}}{{Infobox probability distribution/sandbox}}
Geometric
Probability mass function
File:Geometric pmf.svg
Cumulative distribution function
File:Geometric cdf.svg

For the number of trials <math>k</math>:
Parameters <math>0 < p < 1</math> success probability (real)
Support <math>k \in \{1,2,3,\dots\}</math>
PMF <math>(1 - p)^{k-1} p</math>
CDF <math>1-(1 - p)^k</math>
Mean <math>\frac{1}{p}</math>
Median <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil</math> (not unique if <math>-1/\log_2(1-p)</math> is an integer)
Mode <math>1</math>
Variance <math>\frac{1-p}{p^2}</math>
Skewness <math>\frac{2-p}{\sqrt{1-p}}</math>
Excess kurtosis <math>6+\frac{p^2}{1-p}</math>
Entropy <math>\tfrac{-(1-p)\log_2 (1-p) - p \log_2 p}{p}</math>
MGF <math>\frac{pe^t}{1-(1-p) e^t}</math> for <math>t<-\ln(1-p)</math>
CF <math>\frac{pe^{it}}{1-(1-p)e^{it}}</math>

For the number of failures <math>k</math>:
Parameters <math>0 < p \leq 1</math> success probability (real)
Support <math>k \in \{0,1,2,3,\dots\}</math>
PMF <math>(1 - p)^k p</math>
CDF <math>1-(1 - p)^{k+1}</math>
Mean <math>\frac{1-p}{p}</math>
Median <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1</math> (not unique if <math>-1/\log_2(1-p)</math> is an integer)
Mode <math>0</math>
Variance <math>\frac{1-p}{p^2}</math>
Skewness <math>\frac{2-p}{\sqrt{1-p}}</math>
Excess kurtosis <math>6+\frac{p^2}{1-p}</math>
Entropy <math>\tfrac{-(1-p)\log_2 (1-p) - p \log_2 p}{p}</math>
MGF <math>\frac{p}{1-(1-p)e^t}</math>
CF <math>\frac{p}{1-(1-p)e^{it}}</math>
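A short numerical cross-check of the number-of-trials form (a minimal sketch, not part of either template; it assumes NumPy and SciPy, whose scipy.stats.geom uses this parameterization):

<syntaxhighlight lang="python">
# Check the geometric PMF, CDF, moments and median against scipy.stats.geom.
import numpy as np
from scipy.stats import geom

p, k = 0.25, 3

# PMF (1-p)^(k-1) p and CDF 1 - (1-p)^k for k = 1, 2, 3, ...
assert np.isclose((1 - p)**(k - 1) * p, geom.pmf(k, p))
assert np.isclose(1 - (1 - p)**k, geom.cdf(k, p))

# Mean 1/p and variance (1-p)/p^2
mean, var = geom.stats(p, moments='mv')
assert np.isclose(mean, 1 / p)
assert np.isclose(var, (1 - p) / p**2)

# Median ceil(-1 / log2(1-p)); unique here because -1/log2(0.75) is not an integer
assert np.isclose(geom.median(p), np.ceil(-1 / np.log2(1 - p)))
</syntaxhighlight>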

Gamma distribution

Side by side comparison
{{Infobox probability distribution}}{{Infobox probability distribution/sandbox}}
Gamma
Probability density function
Probability density plots of gamma distributions
Cumulative distribution function
Cumulative distribution plots of gamma distributions

Shape-scale parameterization:
Parameters <math>k > 0</math> shape, <math>\theta > 0</math> scale
Support <math>x \in (0, \infty)</math>
PDF <math>\frac{1}{\Gamma(k) \theta^k} x^{k - 1} e^{-\frac{x}{\theta}}</math>
CDF <math>\frac{1}{\Gamma(k)} \gamma\left(k, \frac{x}{\theta}\right)</math>
Mean <math>\operatorname{E}[X] = k \theta</math>
Median No simple closed form
Mode <math>(k - 1)\theta \text{ for } k \geq 1</math>
Variance <math>\operatorname{Var}(X) = k \theta^2</math>
Skewness <math>\frac{2}{\sqrt{k}}</math>
Excess kurtosis <math>\frac{6}{k}</math>
Entropy <math>\begin{align} k &+ \ln\theta + \ln\Gamma(k) \\ &+ (1 - k)\psi(k) \end{align}</math>
MGF <math>(1 - \theta t)^{-k} \text{ for } t < \frac{1}{\theta}</math>
CF <math>(1 - \theta it)^{-k}</math>

Shape-rate parameterization:
Parameters <math>\alpha > 0</math> shape, <math>\beta > 0</math> rate
Support <math>x \in (0, \infty)</math>
PDF <math>\frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x}</math>
CDF <math>\frac{1}{\Gamma(\alpha)} \gamma(\alpha, \beta x)</math>
Mean <math>\operatorname{E}[X] = \frac{\alpha}{\beta}</math>
Median No simple closed form
Mode <math>\frac{\alpha - 1}{\beta} \text{ for } \alpha \geq 1</math>
Variance <math>\operatorname{Var}(X) = \frac{\alpha}{\beta^2}</math>
Skewness <math>\frac{2}{\sqrt{\alpha}}</math>
Excess kurtosis <math>\frac{6}{\alpha}</math>
Entropy <math>\begin{align} \alpha &- \ln\beta + \ln\Gamma(\alpha) \\ &+ (1 - \alpha)\psi(\alpha) \end{align}</math>
MGF <math>\left(1 - \frac{t}{\beta}\right)^{-\alpha} \text{ for } t < \beta</math>
CF <math>\left(1 - \frac{it}{\beta}\right)^{-\alpha}</math>
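A short numerical cross-check of the shape-scale form (a minimal sketch, not part of either template; it assumes NumPy and SciPy, with scipy.stats.gamma as the reference):

<syntaxhighlight lang="python">
# Check the gamma PDF, incomplete-gamma CDF, moments and entropy against scipy.stats.gamma.
import numpy as np
from scipy.special import gammainc, gammaln, digamma
from scipy.stats import gamma

k, theta, x = 2.5, 1.3, 2.0

# PDF x^(k-1) e^(-x/theta) / (Gamma(k) theta^k)
pdf = x**(k - 1) * np.exp(-x / theta) / (np.exp(gammaln(k)) * theta**k)
assert np.isclose(pdf, gamma.pdf(x, a=k, scale=theta))

# CDF as the regularized lower incomplete gamma function gamma(k, x/theta) / Gamma(k)
assert np.isclose(gammainc(k, x / theta), gamma.cdf(x, a=k, scale=theta))

# Mean k*theta and variance k*theta^2
mean, var = gamma.stats(a=k, scale=theta, moments='mv')
assert np.isclose(mean, k * theta)
assert np.isclose(var, k * theta**2)

# Entropy k + ln(theta) + ln(Gamma(k)) + (1-k) psi(k)
assert np.isclose(k + np.log(theta) + gammaln(k) + (1 - k) * digamma(k),
                  gamma.entropy(a=k, scale=theta))
</syntaxhighlight>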