Template:Infobox probability distribution/testcases
This is the template test cases page for the sandbox of Template:Infobox probability distribution. Purge this page to update the examples. If there are many examples of a complicated template, later ones may break due to limits in MediaWiki; see the HTML comment "NewPP limit report" in the rendered page. You can also use Special:ExpandTemplates to examine the results of template uses.
Normal distribution
{{Infobox probability distribution
| name = Normal distribution
| parameters = <math>\mu \in \mathbb{R}</math> (location)<br /><math>\sigma^2 > 0</math> (variance)
| support = <math>x \in \mathbb{R}</math>
| pdf = <math>\frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}}</math>
| cdf = <math>\frac{1}{2}\left[1 + \operatorname{erf}\left( \frac{x-\mu}{\sigma\sqrt{2}}\right)\right]</math>
| quantile = <math>\mu+\sigma\sqrt{2} \operatorname{erf}^{-1}(2p-1)</math>
| mean = <math>\mu</math>
| median = <math>\mu</math>
| mode = <math>\mu</math>
| variance = <math>\sigma^2</math>
| mad = <math>\sqrt{2/\pi}\sigma</math>
| skewness = <math>0</math>
| kurtosis = <math>0</math>
| entropy = <math>\frac{1}{2} \log(2\pi e\sigma^2)</math>
| mgf = <math>\exp(\mu t + \sigma^2 t^2/2)</math>
| char = <math>\exp(i\mu t - \sigma^2 t^2/2)</math>
| fisher = <math>\mathcal{I}(\mu,\sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix}</math>
<math>\mathcal{I}(\mu,\sigma^2) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix}</math>
| KLDiv = <math>D_\text{KL}(\mathcal{N}_0 \| \mathcal{N}_1) = \frac{1}{2}\left\{ \left(\frac{\sigma_0}{\sigma_1}\right)^2 + \frac{(\mu_1 - \mu_0)^2}{\sigma_1^2} - 1 + 2 \ln \frac{\sigma_1}{\sigma_0} \right\}</math>
}}
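The closed forms in the infobox above can be spot-checked numerically. The following is an illustrative sketch (not part of the test cases) using SciPy, with arbitrary example values for the parameters:

<syntaxhighlight lang="python">
# Illustrative check of the normal-distribution formulas above (arbitrary example values).
import numpy as np
from scipy import stats
from scipy.special import erf
from scipy.integrate import quad

mu, sigma, x = 1.3, 0.7, 2.1
# pdf and cdf closed forms vs. SciPy
assert np.isclose(stats.norm.pdf(x, mu, sigma),
                  np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi)))
assert np.isclose(stats.norm.cdf(x, mu, sigma),
                  0.5 * (1 + erf((x - mu) / (sigma * np.sqrt(2)))))

# KL divergence D_KL(N0 || N1): closed form vs. numerical integral of p0 * log(p0/p1)
m0, s0, m1, s1 = 0.0, 1.0, 0.5, 2.0
closed = 0.5 * ((s0 / s1)**2 + (m1 - m0)**2 / s1**2 - 1 + 2 * np.log(s1 / s0))
numeric, _ = quad(lambda t: stats.norm.pdf(t, m0, s0)
                  * (stats.norm.logpdf(t, m0, s0) - stats.norm.logpdf(t, m1, s1)),
                  -np.inf, np.inf)
assert np.isclose(closed, numeric)
</syntaxhighlight>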
Binomial distribution
{{Infobox probability distribution
| name = Binomial distribution
| parameters = <math>n \in \{0, 1, 2, \dots\}</math> number of trials<br /><math>p \in [0, 1]</math> success probability for each trial
| support = <math>k \in \{0, 1, \dots, n\}</math> number of successes
| pdf = <math>\binom{n}{k} p^k (1-p)^{n-k}</math>
| mean = <math>np</math>
| variance = <math>np(1-p)</math>
| skewness = <math>\frac{1-2p}{\sqrt{np(1-p)}}</math>
| kurtosis = <math>\frac{1-6p(1-p)}{np(1-p)}</math>
| entropy = <math>\frac{1}{2} \log_2 \left( 2\pi e np(1-p) \right) + O \left( \frac{1}{n} \right)</math>
in shannons; for nats, use the natural logarithm instead.
| mgf = <math>(1-p + pe^t)^n</math>
| char = <math>(1-p + pe^{it})^n</math>
| pgf = <math>G(z) = [(1-p) + pz]^n</math>
| fisher = <math> g_n(p) = \frac{n}{p(1-p)} </math>
(for fixed <math>n</math>)
}}
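As above, these closed forms can be compared against direct computation. The sketch below (illustrative only, with arbitrary n, p, t) checks the excess-kurtosis and MGF expressions against SciPy/NumPy:

<syntaxhighlight lang="python">
# Illustrative check of the binomial kurtosis and MGF formulas above (arbitrary values).
import numpy as np
from scipy import stats

n, p, t = 12, 0.3, 0.4
k = np.arange(n + 1)
pmf = stats.binom.pmf(k, n, p)

# Excess kurtosis: closed form vs. SciPy's value
assert np.isclose((1 - 6 * p * (1 - p)) / (n * p * (1 - p)),
                  stats.binom.stats(n, p, moments='k'))

# MGF: closed form vs. the expectation of exp(tX) under the pmf
assert np.isclose((1 - p + p * np.exp(t))**n, np.sum(pmf * np.exp(t * k)))
</syntaxhighlight>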
Geometric distribution
{{Infobox probability distribution
| name = Geometric distribution
| parameters = <math>0< p \leq 1</math> success probability (real)
| support = k trials where <math>k \in \{1,2,3,\dots\}</math>
| pdf = <math>(1 - p)^{k-1} p</math>
| cdf = <math>1-(1 - p)^k</math>
| mean = <math>\frac{1}{p}</math>
| median = <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil</math>
(not unique if <math>-1/\log_2(1-p)</math> is an integer)
| mode = <math>1</math>
| variance = <math>\frac{1-p}{p^2}</math>
| skewness = <math>\frac{2-p}{\sqrt{1-p}}</math>
| kurtosis = <math>6+\frac{p^2}{1-p}</math>
| entropy = <math>\tfrac{-(1-p)\log_2 (1-p) - p \log_2 p}{p}</math>
| mgf = <math>\frac{pe^t}{1-(1-p) e^t},</math>
for <math>t<-\ln(1-p)</math>
| char = <math>\frac{pe^{it}}{1-(1-p)e^{it}}</math>
| parameters2 = <math>0< p \leq 1</math> success probability (real)
| support2 = k failures where <math>k \in \{0,1,2,3,\dots\}</math>
| pdf2 = <math>(1 - p)^k p</math>
| cdf2 = <math>1-(1 - p)^{k+1}</math>
| mean2 = <math>\frac{1-p}{p}</math>
| median2 = <math>\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1</math>
(not unique if <math>-1/\log_2(1-p)</math> is an integer)
| mode2 = <math>0</math>
| variance2 = <math>\frac{1-p}{p^2}</math>
| skewness2 = <math>\frac{2-p}{\sqrt{1-p}}</math>
| kurtosis2 = <math>6+\frac{p^2}{1-p}</math>
| entropy2 = <math>\tfrac{-(1-p)\log_2 (1-p) - p \log_2 p}{p}</math>
| mgf2 = <math>\frac{p}{1-(1-p)e^t}</math>
| char2 = <math>\frac{p}{1-(1-p)e^{it}}</math>
}}
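A quick numerical illustration (not part of the test cases): the two parameterisations above differ only by a shift of one, which SciPy's geom exposes through its loc argument; the values of p and k below are arbitrary:

<syntaxhighlight lang="python">
# Illustrative check of the two geometric parameterisations above (arbitrary values).
import numpy as np
from scipy import stats

p, k = 0.35, 4
trials = stats.geom(p)             # support {1, 2, 3, ...}: number of trials
failures = stats.geom(p, loc=-1)   # support {0, 1, 2, ...}: number of failures

assert np.isclose(trials.mean(), 1 / p)
assert np.isclose(failures.mean(), (1 - p) / p)
assert np.isclose(trials.var(), (1 - p) / p**2)
assert np.isclose(trials.cdf(k), 1 - (1 - p)**k)
assert np.isclose(failures.cdf(k), 1 - (1 - p)**(k + 1))
</syntaxhighlight>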
Gamma distribution
{{Infobox probability distribution
| name = Gamma distribution
| parameters = <math>k > 0</math> shape<br /><math>\theta > 0</math> scale
| support = <math>x \in (0, \infty)</math>
| pdf = <math>\frac{1}{\Gamma(k)\theta^k} x^{k - 1} e^{-\frac{x}{\theta}}</math>
| cdf = <math>\frac{1}{\Gamma(k)} \gamma\left(k, \frac{x}{\theta}\right)</math>
| mean = <math>\operatorname{E}[X] = k \theta</math>
| median = No simple closed form
| mode = <math>(k - 1)\theta \text{ for } k \geq 1</math>
| variance = <math>\operatorname{Var}(X) = k \theta^2</math>
| skewness = <math>\frac{2}{\sqrt{k}}</math>
| kurtosis = <math>\frac{6}{k}</math>
| entropy = <math>\begin{align} k &+ \ln\theta + \ln\Gamma(k) \\ &+ (1 - k)\psi(k) \end{align}</math>
| mgf = <math>(1 - \theta t)^{-k} \text{ for } t < \frac{1}{\theta}</math>
| char = <math>(1 - \theta it)^{-k}</math>
| parameters2 = <math>\alpha > 0</math> shape<br /><math>\beta > 0</math> rate
| support2 = <math>x \in (0, \infty)</math>
| pdf2 = <math>\frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha - 1} e^{-\beta x}</math>
| cdf2 = <math>\frac{1}{\Gamma(\alpha)} \gamma(\alpha, \beta x)</math>
| mean2 = <math>\operatorname{E}[X] = \frac{\alpha}{\beta}</math>
| median2 = No simple closed form
| mode2 = <math>\frac{\alpha - 1}{\beta} \text{ for } \alpha \geq 1</math>
| variance2 = <math>\operatorname{Var}(X) = \frac{\alpha}{\beta^2}</math>
| skewness2 = <math>\frac{2}{\sqrt{\alpha}}</math>
| kurtosis2 = <math>\frac{6}{\alpha}</math>
| entropy2 = <math>\begin{align} \alpha &- \ln \beta + \ln\Gamma(\alpha) \\ &+ (1 - \alpha)\psi(\alpha) \end{align}</math>
| mgf2 = <math>\left(1 - \frac{t}{\beta}\right)^{-\alpha} \text{ for } t < \beta</math>
| char2 = <math>\left(1 - \frac{it}{\beta}\right)^{-\alpha}</math>
}}
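An illustrative sketch (not part of the test cases) relating the two parameterisations: with beta = 1/theta the shape/scale and shape/rate forms above describe the same distribution, which can be verified against SciPy (arbitrary example values):

<syntaxhighlight lang="python">
# Illustrative check of the gamma shape/scale vs. shape/rate formulas above (arbitrary values).
import numpy as np
from scipy import stats
from scipy.special import gammaln, digamma

k, theta, x = 2.5, 1.8, 3.0
alpha, beta = k, 1 / theta          # the rate form with beta = 1/theta

dist = stats.gamma(k, scale=theta)  # SciPy's gamma uses the shape/scale form

# pdf of the rate form vs. SciPy's shape/scale pdf
assert np.isclose(dist.pdf(x),
                  beta**alpha / np.exp(gammaln(alpha)) * x**(alpha - 1) * np.exp(-beta * x))
assert np.isclose(dist.mean(), k * theta)
assert np.isclose(dist.var(), k * theta**2)

# Differential entropy: closed form vs. SciPy
assert np.isclose(dist.entropy(),
                  k + np.log(theta) + gammaln(k) + (1 - k) * digamma(k))
</syntaxhighlight>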