Being reliant on the roll of the dice, actions and outcomes in Dungeons and Dragons are fundamentally tied to probability and statistics. In this article I explore how sums of dice are related to discrete probability distributions, building up from the case of a single die to combining any arbitrary set of dice. The quantitative tools we develop will be sufficient to make basic assessments of the (un)likelihood of a given observed outcome.

`\[ \newcommand{\given}{\:\lvert\:\mathopen} \newcommand{\Prob}[1]{\mathbb{P}\left( #1 \right)} \newcommand{\dd}{\mathrm{d}} \]`

## Introduction

As a pervasive user of dice, Dungeons and Dragons (D&D) has an inherent
probabilistic and statistical quality to the ways in which game play
evolves.
Dice are relatively simple objects, and (at least in my experience)
grade-school introductions to basic concepts in probability made use of them.
Of course, the kinds of analysis and insight possible with less than
high-school level math are rather limited.
Therefore, I thought it would be interesting to explore *slightly* more
complex ideas in statistics and probability through particular concepts
encountered in D&D game play.

For instance, health points (HP) are accumulated over time as a character levels up, and sometimes it seems player characters end up with either lower or higher than expected HP totals, and it’d be interesting to quantify just how aberrant or not the situation is. In this article, I will build up the statistical tools necessary to make such quantitative assessments, reviewing some basics of probability distributions and how to work with them. In an optional appendix, I’ll also take a deeper dive into a related mathematical field which can make the numerical analysis “easier” at the expense of requiring more sophisticated mathematical tools.

One note on notation: if you’re here but unfamiliar with D&D, the convention
used to describe dice with different numbers of faces is to say “`\(\dd\)`” and then
the number of faces, i.e. a 6-sided die would be called a “`\(\dd6\)`”.
This then makes it easy to talk about multiple dice by prefixing the die
designation with a number — “`\(4\dd6\)`” means four 6-sided dice used in combination
in some way.
This notation becomes especially useful when combinations of different dice
plus bonuses are used, i.e. “`\(2\dd8 + 1\dd6 + 4\)`” is much faster and easier to
read than “two 8-sided dice plus a 6-sided die plus four”.

## One die

Hopefully almost everyone already knows how to describe the probability of getting any value on a single die roll — you are equally likely to get any one of the faces — but let’s take a moment to connect this known statement and rewrite it mathematically. Doing so will help connect the mathematical notation and formulas we’ll need for more complex scenarios with the simplest and easy-to-reason-about case that everyone should be familiar with.

Transcribing the statement of “equally likely to get any one face” into math
requires defining a few quantities first.
Let `\(N \in \mathbb{Z}^+\)` be some positive integer that describes the number of
faces a die has, and then let `\(r\)` be the value of some roll of the given
die such that `\(r \in \mathbb{Z}^+\)` is also a positive integer.
Then we can write the probability `\(\Prob{r \in \dd N}\)` of getting a
particular roll `\(r\)` given a `\(\dd N\)` die as the function
`\begin{align} \Prob{r \in \dd N} = \begin{cases} \frac{1}{N} & 1 \le r \le N \\ 0 & \text{otherwise} \end{cases} \label{eqn:Pdie} \end{align}`
i.e. there is equal probability for `\(1 \le r \le N\)` and no probability of
rolling a value that is not on the die.
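As a quick sanity check, the piecewise definition translates almost directly to code. (This is my own sketch — the function name `prob_roll` and the use of Julia rationals are not from anything above.)

```julia
# Probability of rolling the value r on an N-sided die, as a rational number.
# This mirrors the piecewise definition: 1/N on the valid faces, 0 elsewhere.
prob_roll(N::Integer, r::Integer) = 1 ≤ r ≤ N ? 1 // N : 0 // N

prob_roll(6, 3)  # 1//6
prob_roll(6, 7)  # 0//1
```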

Note that we have added one additional constraint: not only is the probability
equal for all faces, but we must also “conserve probability”, which means
that the sum of the probabilities over all values must be equal to `\(1\)`.
For an `\(N\)`-sided die, that translates to each face contributing `\(1/N\)`-th of the
total probability in the sum.
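A one-line check (a sketch of mine, using exact rational arithmetic) confirms that this holds for all of the standard die sizes:

```julia
# Each face contributes 1//N, so the probabilities over all N faces sum to
# exactly 1 for every standard die size.
all(sum(fill(1 // N, N)) == 1 for N in (4, 6, 8, 10, 12, 20))  # true
```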

Figure 2.1 plots the probability distribution using this definition for
both a `\(\dd6\)` and a `\(\dd12\)` die:

I’ll be using this style of plot throughout the rest of this article, so it’s worth getting to know well. Because dice take on discrete, integer values, we have a discrete probability distribution, and to make that visually clear I’m using a stem plot rather than a simple line plot. The height of each colored bar is equal to the probability of getting the particular value matching its horizontal position.

This style of plot also makes the conservation of probability visually
evident: adding more faces requires that the height of each bar must
correspondingly decrease.
A `\(\dd12\)` die with twice as many faces as a `\(\dd6\)` has exactly half the probability of
landing on values `\(1\)` through `\(6\)` compared to the `\(\dd6\)` (because there are also the
possible values `\(7\)` through `\(12\)` that a `\(\dd12\)` can land on).

With the basics established, we can now move on to tackling the case of multiple dice.

## Two dice

While rolling a single die is very common, we also want to understand the
probability distributions of rolling multiple dice for an outcome.
We will again start with a simple fact that many people seem to know: when
rolling `\(2\dd6\)`, the most likely total is `\(7\)`.

This fact can be checked relatively easily by hand with just a bit of
brute-force trial and error.
At the extremes, there is only one way each to achieve the minimum (`\(2\)`) or
maximum (`\(12\)`) sum:

| 2 |
|---|
| ⚀ + ⚀ |

| 12 |
|---|
| ⚅ + ⚅ |

In contrast, there are 6 ways to get a total of 7:

| 7 | |
|---|---|
| ⚀ + ⚅ | ⚅ + ⚀ |
| ⚁ + ⚄ | ⚄ + ⚁ |
| ⚂ + ⚃ | ⚃ + ⚂ |

(You should be able to easily convince yourself that no other total will have as many
combinations.)
Given `\(2\dd6\)`, there are `\(6 \times 6 = 36\)` possible
permutations, so that makes the probability
of getting a two or twelve `\(1/36 \approx 2.77\%\)`, while the maximum likelihood is for the
value `\(7\)` with `\(6/36 = 1/6 \approx 16.66\%\)` probability.
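The full by-hand tabulation can also be mechanized in a few lines (a brute-force sketch of my own, not the general method developed below):

```julia
# Count how many of the 36 ordered (r₁, r₂) permutations produce each total.
counts = Dict{Int,Int}()
for r₁ in 1:6, r₂ in 1:6
    S = r₁ + r₂
    counts[S] = get(counts, S, 0) + 1
end

counts[2], counts[7], counts[12]  # (1, 6, 1)
```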

For only two dice, the brute-force solution is relatively simple, especially when the dice have relatively few faces, but this obviously isn’t going to scale very well. We need a more mathematical and systematic approach than just listing every combination explicitly.

The key to generalizing this problem is to take the correct viewpoint.
Rather than asking the direct question “what fraction of possible combinations of two
dice sum to `\(S\)`?”, let us instead ask the slightly simpler question “given that the first
die rolled `\(r_1\)`, what is the probability that the second roll `\(r_2\)` will sum to
`\(r_1 + r_2 = S\)`?”.

You might now ask, “How does this help us?”

Since the sum `\(S\)` is known as well as the first die roll `\(r_1\)`, we can solve for the
one remaining unknown, the roll `\(r_2\)`:
`\begin{align*} r_2 = S - r_1 \end{align*}`

The probability distribution of rolls for this second die, though, is just the
distribution in Eqn. `\(\ref{eqn:Pdie}\)`:
`\begin{align} \Prob{r_2 \in \dd N} = \Prob{S - r_1 \in \dd N} \end{align}`

This is then the probability of rolling the second die as exactly the value required
to obtain `\(S\)` (if possible) given the value of the first roll.

With the probabilities `\(\Prob{r_1 \in \dd N}\)` and `\(\Prob{S - r_1 \in \dd N}\)`
both known, the probability of rolling `\(r_1\)` and then immediately rolling
exactly `\(r_2 = S - r_1\)` is the product of the two per-event probabilities:
`\begin{align} \Prob{S \in 2\dd N \given r_1} &= \Prob{r_1 \in \dd N} \times \Prob{S - r_1 \in \dd N} \label{eqn:joint_2dN} \\ {} &= \begin{cases} \frac{1}{N^2} & 1 \le r_1 \le N \text{ and } 1 \le S-r_1 \le N \\ 0 & \text{otherwise} \end{cases} \nonumber \end{align}`

This is now the answer to the second, simpler question we posed:
the probability of rolling the required value `\(r_2\)` given a previous roll of `\(r_1\)` that
sums to `\(S\)` is `\(1/N^2\)` (assuming the desired sum `\(S\)` is possible, i.e. `\(2 \le S \le 2N\)`).
(Again, this answer about the probability of getting two specific dice rolls is hopefully
something you are already familiar with, even if the notation is new.)

Now we are ready to generalize back to the first question of what the probability of a total
`\(S\)` is given any combination of two dice.
We get that answer by simply lifting the restriction that the first die roll be `\(r_1\)` and
summing the probabilities of all possible answers:
`\begin{align} \Prob{S \in 2\dd N} &= \sum_{r_1} \Prob{r_1 \in \dd N} \Prob{S - r_1 \in \dd N} \end{align}`

Be mindful of a subtle distinction here. We are adding up the probabilities of specific events that are drawn from a known joint probability distribution; we are *not* adding the probability distributions themselves.

We can show that this formula gives the same result as our by-hand cases earlier by just
running through all possible values of `\(r_1\)`.
Most of the terms in the infinite series are `\(0\)` (when `\(r_1 < 1\)` or `\(r_1 > 6\)`), so only a
couple of terms will contribute:
`\begin{align*} \Prob{2 \in 2\dd 6} &= \underbrace{\ldots}_{0} + \underbrace{\Prob{0\in\dd 6} \, \Prob{2\in\dd 6}}_{0} + \underbrace{\Prob{1\in\dd 6} \, \Prob{1\in\dd 6}}_{1/36} + \underbrace{\Prob{2\in\dd 6} \, \Prob{0\in\dd 6}}_{0} + \underbrace{\ldots}_{0} = \frac{1}{36} \\ \Prob{7 \in 2\dd 6} &= \underbrace{\ldots}_{0} + \underbrace{\Prob{0\in\dd 6} \, \Prob{7\in\dd 6}}_{0} + \underbrace{\Prob{1\in\dd 6} \, \Prob{6\in\dd 6}}_{1/36} + \underbrace{\Prob{2\in\dd 6} \, \Prob{5\in\dd 6}}_{1/36} \\ {}&\,\quad \hphantom{\underbrace{\ldots}_{0}} + \underbrace{\Prob{3\in\dd 6} \, \Prob{4\in\dd 6}}_{1/36} + \underbrace{\Prob{4\in\dd 6} \, \Prob{3\in\dd 6}}_{1/36} + \underbrace{\Prob{5\in\dd 6} \, \Prob{2\in\dd 6}}_{1/36} \\ {}&\,\quad \hphantom{\underbrace{\ldots}_{0}} + \underbrace{\Prob{6\in\dd 6} \, \Prob{1\in\dd 6}}_{1/36} + \underbrace{\Prob{7\in\dd 6} \, \Prob{0\in\dd 6}}_{0} + \underbrace{\ldots}_{0} = \smash{\frac{1}{6}} \\ \Prob{12 \in 2\dd 6} &= \underbrace{\ldots}_{0} + \underbrace{\Prob{5\in\dd 6} \, \Prob{7\in\dd 6}}_{0} + \underbrace{\Prob{6\in\dd 6} \, \Prob{6\in\dd 6}}_{1/36} + \underbrace{\Prob{7\in\dd 6} \, \Prob{5\in\dd 6}}_{0} + \underbrace{\ldots}_{0} = \frac{1}{36} \end{align*}`

(Note that this is why it is convenient to explicitly define the probability function over
all integers — the summation can conveniently ignore what the valid range of values is
because the function will just be `\(0\)` for invalid inputs.)

It is then quite simple to turn Eqn. `\(\ref{eqn:joint_2dN}\)` into a function to calculate
the total probability numerically.
For the simple case of two dice with the same number of sides `\(N\)`, the following function
will return the probability (as a rational number) of obtaining the total `\(S\)`:

```
function prob_sum_2dN(N::Integer, S::Integer)
    P = zero(S)
    for r₁ in 1:N
        r₂ = S - r₁
        1 ≤ r₂ ≤ N || continue
        P += 1
    end
    return P // N^2
end
```

Evaluating the function for the probability of having a total of `\(7\)` given a `\(2\dd6\)` roll,

```
prob_sum_2dN(6, 7) = 1//6
```

we find the expected `\(1/6 \approx 16.66\%\)` probability.

We can also go one step further and calculate the probability distribution for
all possible rolls (spanning the range `\(2\)` through `\(12\)`); the result is shown
in Fig. 3.1.
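To produce the data behind a plot like Fig. 3.1, the same function can simply be evaluated over every possible total. (The definition is repeated here in a slightly condensed form of my own, so that the snippet stands alone.)

```julia
# Condensed variant of prob_sum_2dN: count the valid (r₁, r₂) pairs directly
prob_sum_2dN(N, S) = count(r₁ -> 1 ≤ S - r₁ ≤ N, 1:N) // N^2

# Probability of each possible 2d6 total, from the minimum 2 to the maximum 12
dist = [S => prob_sum_2dN(6, S) for S in 2:12]
```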

This should match with your intuition — rolls near the middle of the range are more likely than either extreme.

## Multiple Dice

While now having the ability to calculate the probability distribution for two dice (with the same number of sides) is a good start, our ultimate goal is to be able to calculate the distribution for any number of dice with any combination of faces. To do that, we’ll need to generalize the 2-dice case a bit more.

Depending on your background, the form of Eqn. `\(\ref{eqn:joint_2dN}\)` might
be either highly suggestive or just an opaque formula.
For a physicist (or any scientist or engineer) like me, hopefully you
recognize that the equation is a
discrete convolution:
`\begin{align} (f * g)(x) &\equiv \sum_z f(z) g(x - z) \label{eqn:defn_conv} \end{align}`
for some arbitrary discrete functions `\(f(x)\)` and `\(g(x)\)`.
This is a useful fact because it allows us to immediately generalize to
any number of dice with any configuration of faces:

- Convolutions are commutative in the order of their arguments — `\(f * g = g * f\)` — which means we do not have to worry about which die’s probability is the “given” (i.e. `\(\Prob{r_1 \in \dd N}\)`) and which one is “solved for” (i.e. `\(\Prob{S - r_1 \in \dd N}\)`).
- Convolutions are associative — `\(f * (g * h) = (f * g) * h\)` — which means that we can extend the number of die rolls by simply convolving another die’s uniform probability distribution with the result of an arbitrary number of earlier convolutions.
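Both properties are easy to verify numerically. The sketch below uses a `convolve` helper of my own over plain 1-based vectors (ignoring the index bookkeeping handled later in the article) to check them for a `\(\dd4\)` and two `\(\dd6\)`:

```julia
# Discrete convolution of two plain vectors (1-based indexing; the offset of
# the output's support is ignored here since we only compare values).
function convolve(f::Vector{Float64}, g::Vector{Float64})
    out = zeros(length(f) + length(g) - 1)
    for i in eachindex(f), j in eachindex(g)
        out[i + j - 1] += f[i] * g[j]
    end
    return out
end

d4 = fill(1/4, 4)
d6 = fill(1/6, 6)
convolve(d4, d6) ≈ convolve(d6, d4)                              # commutative
convolve(d4, convolve(d6, d6)) ≈ convolve(convolve(d4, d6), d6)  # associative
```

Both comparisons return `true`, and the result of either associative grouping is the full `\(1\dd4 + 2\dd6\)` distribution.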

As a matter of notation: as used in Eqn. `\(\ref{eqn:defn_conv}\)`, it is typical in functional convolutions to use the syntax `\((f * g)(x)\)` since it is more compact to write than the explicit summation. The notation typically used for probabilities, though, makes this shortcut cumbersome since `\(\Prob{S \in \dd N}\)` places both the random variable `\(S\)` and the source distribution `\(\dd N\)` within the parentheses. To avoid some potential confusion, I’m not going to try to mix the two notations — instead of writing `\(\left[ \Prob{\dd N} * \Prob{\dd N'} \right](S)\)` or something like that, I will instead prefer to just write the convolution via its summation definition.

To turn this into a concrete example from D&D, say you want to know the
probability distribution for `\(2\dd6\)` damage rolls for a
Greatsword attack.
We already calculated the probability distribution for that case to be
`\begin{align} \Prob{S \in 2\dd6} &= \sum_{r} \Prob{r \in \dd6} \, \Prob{S - r \in \dd6} \end{align}`
with the distribution plotted in Fig. 3.1.
Now let us compare that to the case where you also add an extra `\(1\dd4\)` damage from something like
Divine Favor.
Given the convolution properties we just asserted, the desired probability
distribution `\(\Prob{S \in 2\dd6 + 1\dd4}\)` can be computed by convolving
`\(\Prob{S \in 2\dd6}\)` and `\(\Prob{S \in 1\dd4}\)`:
`\begin{align} \Prob{S \in 2\dd6 + 1\dd4} &= \sum_r \Prob{r \in 2\dd6} \, \Prob{S - r \in 1\dd4} \end{align}`

Trying to work out the explicit answer symbolically to this series of convolutions is messy and tedious at best, but calculating the probabilities numerically is quite easy.

The first component we need is a simple helper function to just return the single-die probability distribution:

```
using OffsetArrays
# Returns uniform distribution 1/N on the range of values 1 through N
prob_dN(N::Integer) = fill!(zeros(1:N), 1/N)
```

The second half is to define a function which performs the convolution of two
given probability distributions.
As an implementation detail, be sure to note that indexing must be carefully
handled.
We could get by with normal Julia `Array`s with no significant change
because the first valid die value is `\(1\)`, which lines up with the 1-indexing
of Julia’s arrays, but replicating this in C or Python where arrays are
0-indexed would require some additional `±1` factors.
(We actually side-step that issue completely by taking advantage of the
`OffsetArrays.jl` package
to define the probability distributions in arrays that carry their own arbitrary
index ranges.)

```
function conv(P₁::AbstractVector, P₂::AbstractVector)
    # Valid indices (i.e. roll sums) of the input distributions
    ax₁ = axes(P₁, 1)
    ax₂ = axes(P₂, 1)
    # The output index range is a simple function of the input indices
    newax = (first(ax₁) + first(ax₂)):(last(ax₁) + last(ax₂))
    Pc = zeros(eltype(P₁), newax) # combined probability distribution
    # Then just loop over the range of possible output indices...
    for S in newax
        # ...and for each one, calculate the convolved summation.
        for r₁ in ax₁
            r₂ = S - r₁
            if r₂ ∉ ax₂
                # Skip to the next iteration if r₂ is outside P₂'s range
                # of indices, which corresponds to 0 probability
                continue
            end
            Pc[S] += P₁[r₁] * P₂[r₂]
        end
    end
    return Pc
end
```

With these two functions defined, we can then just build up the total
probability distributions for the `\(2\dd6\)` and `\(2\dd6 + 1\dd4\)` dice rolls:

```
P1d4 = prob_dN(4) # d4 probability distribution
P1d6 = prob_dN(6) # d6 probability distribution
P2d6 = conv(P1d6, P1d6) # convolved 2d6 distribution
P2d6_1d4 = conv(P2d6, P1d4) # convolved 2d6 + 1d4 distribution
```

Then plotting both distributions against one another:

The distributions should qualitatively follow your intuition — adding another die
necessarily gives you access to higher totals, raises the minimum possible total,
and moves the most likely result to greater values as well.
With a numerical calculation, though, we can make those qualitative assessments
quantitative.
For instance, it’s now possible to say that achieving the maximum total damage
of `\(16\)` occurs with less than 1% probability:

```
P2d6_1d4[16] = 0.006944444444444444
```

The final minor extension required to calculate typical D&D dice rolls is the ability to add an arbitrary bonus. Hopefully, though, the solution is obvious — adding a constant just shifts the entire distribution left or right (negative and positive bonuses, respectively), which can be easily realized as a reinterpretation of the axes. As an exercise for the reader, though, the shift can also be accomplished by convolving the dice roll distribution with an appropriately defined Kronecker delta function centered at the bonus value.
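For the convolution view of that exercise: a flat `\(+b\)` bonus corresponds to a “distribution” that is a Kronecker delta, `\(\delta_{r,b}\)`, carrying all of its probability at the single value `\(b\)`. Convolving with it picks out exactly one term of the sum (a sketch of the identity only; the worked exercise is left to the reader):
`\begin{align*} \Prob{S \in M\dd N + b} = \sum_{r} \Prob{r \in M\dd N} \, \delta_{S - r,\, b} = \Prob{S - b \in M\dd N} \end{align*}`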

## Conclusions

In this article we have seen how the uniform probability distributions for die rolls (on a finite interval) can be combined via convolutions to produce the probability distribution for an arbitrary combination of dice. To this point, we have considered only a few dice, though, while in the introduction I gave the example of HP rolls over many dice as a motivation for quantitatively assessing the likelihood of a given outcome.

The case I had in mind which motivated this exploration was a player character
that reached 90 HP at 10th level.
Specifically, the probability distribution of interest is
`\begin{align*} \Prob{S \in 1\dd12 + 8\dd10 + 32} &= \Prob{S' \in 1\dd12 + 8\dd10} \end{align*}`
where the character is a level 2/8 split multiclassed Barbarian/Fighter and the
bonus `\(12 + 20 = 32\)` comes from the first-level default (Barbarian maximum `\(\dd12\)`)
and Constitution (CON +2).
The right-hand side implies that `\(S' = S - 32\)`; we can ignore the bonus if we
appropriately rescale the value of interest because the bonus only shifts the
indices of the probability distribution without changing its shape.

Just as we did for 3 dice, we use the tools we have developed to build up to the desired probability distribution for the 9 dice. Then as a very basic statistic, we can consider the integrated probability below and above the actual outcome to get a sense of how (un)likely the observed case is:

```
P1d10 = prob_dN(10)
P1d12 = prob_dN(12)
# build up to 8d10
P2d10 = conv(P1d10, P1d10)
P4d10 = conv(P2d10, P2d10)
P8d10 = conv(P4d10, P4d10)
# combined 1d12 + 8d10
P1d12_8d10 = conv(P1d12, P8d10)
# integrated probabilities above and below the observed roll of 58
P_gtr58 = sum(P1d12_8d10[59:end])
P_leq58 = sum(P1d12_8d10[begin:58])
```

We calculate that the actual observed total of `\(58\)` in `\(9\)` dice rolls
sits at roughly the 81st percentile of the distribution, leaving almost a 19%
probability of actually getting a higher value.

`\begin{align*} \Prob{S > 58 \in 1\dd12 + 8\dd10} &= 18.54\% \\ \Prob{S \le 58 \in 1\dd12 + 8\dd10} &= 81.46\% \end{align*}`

From this, we can assess that this outcome is not unlikely.^{1}

Finally, for completeness, we also plot the distribution, shown in Fig. 5.1
with the vertical orange line indicating the actual observed result.
`\(\Prob{S > 58 \in 1\dd12 + 8\dd10}\)` corresponds to the sum of the values to the
right of (greater than) the orange line, and `\(\Prob{S \le 58 \in 1\dd12 + 8\dd10}\)`
is the sum of the values to the left of (less than) or equal to the orange line.

In the following appendix, I will explore a slightly simpler way to calculate the joint probability distribution over many die rolls, but doing so will require some additional mathematical tools.

Feel free to end here if the above discussion satisfies your needs.

## Fourier Convolution Theorem

As a physicist, the first extension that comes to mind after encountering a
convolution is the
Fourier convolution theorem,
wherein we move from “real” space to Fourier space and can replace the convolution
with pointwise multiplication,
`\begin{align} \newcommand{\FT}{\mathcal{F}} \newcommand{\iFT}{\mathcal{F}^{-1}} \FT[f * g] &= \FT[f] \cdot \FT[g] \\ f * g &= \iFT\left\{ \FT[f] \cdot \FT[g] \right\} \label{eqn:fftconvthm} \end{align}`
where `\(\FT\)` is the forward Fourier transform and `\(\iFT\)` is its inverse.

In our case where we have discrete data over a finite domain, we can forego
requiring a full Fourier transform over the entire real line and can instead
work with the fast Fourier transform (FFT).
We will use the popular, industry-standard FFTW library, provided by
`FFTW.jl`.

Using Eqn. `\(\ref{eqn:fftconvthm}\)`, we can replace the function `conv`
from Section IV with a much shorter implementation:^{2}

```
using FFTW
function fft_prob(P₁::AbstractVector, P₂::AbstractVector)
    ax₁ = axes(P₁, 1)
    ax₂ = axes(P₂, 1)
    N = last(ax₁) + last(ax₂)
    # The input to an FFT must always have a first (logical) index of 0, and
    # the output axes are the same as the input axes — copy the inputs to
    # arrays with the desired output shape.
    P₁′ = zeros(eltype(P₁), 0:N)
    P₂′ = zeros(eltype(P₂), 0:N)
    P₁′[ax₁] = P₁
    P₂′[ax₂] = P₂
    # Eqn 9, using "real" FFT for efficiency.
    Pc = irfft(rfft(parent(P₁′)) .* rfft(parent(P₂′)), N + 1)
    return OffsetArray(Pc, 0:N)
end
```

```
P2d6′ = fft_prob(P1d6, P1d6) # convolved 2d6 distribution
P2d6_1d4′ = fft_prob(P2d6′, P1d4) # convolved 2d6 + 1d4 distribution
```

Plotting the result gives the same distribution:

One of the big advantages of switching from convolution to multiplication is
that the joint probability of multiple rolls of the same die is much
easier to calculate — instead of performing multiple convolutions, we perform
multiple multiplications.
Repeated multiplication is just exponentiation, so:
`\begin{align*} \underbrace{f * f * \ldots * f}_{M\text{ times}} = \iFT\left\{ (\FT[f]) ^ M \right\} \end{align*}`

This lets us skip directly to any probability `\(\Prob{S \in M \dd N}\)`
involving `\(M\)` rolls of a `\(\dd N\)` die, which can then be combined (if necessary)
with rolls of a different die using convolution.
For instance, compare a new `prob_MdN(M, N)` function which does this to the
previous definition of `prob_dN(N)` which gave the starting distribution for a
single die:

```
# Returns uniform distribution 1/N on the range of values 1 through N
prob_dN(N::Integer) = fill!(zeros(1:N), 1/N)
# Returns the joint distribution for M rolls of a dN die
function prob_MdN(M::Integer, N::Integer)
    K = M * N # maximum total
    # starting die probability distribution
    P = zeros(0:K)
    P[1:N] .= 1 / N
    # transform to Fourier space and "convolve" M times
    F = rfft(parent(P)) .^ M
    # inverse transform
    return OffsetArray(irfft(F, K + 1), 0:K)
end
```

In the conclusions above, we had to iteratively construct the probability
distribution for `\(8\dd10\)`; with `prob_MdN`, we can instead construct it
directly with a single call:

```
P8d10 = prob_MdN(8, 10)
```

This makes constructing the total probability
`\(\Prob{S \in 1\dd12 + 8\dd10}\)` much easier to write out than before:

```
P1d12_8d10′ = fft_prob(prob_MdN(1, 12), prob_MdN(8, 10))
# integrated probabilities above and below the observed roll of 58
P_gtr58′ = sum(P1d12_8d10′[59:end])
P_leq58′ = sum(P1d12_8d10′[begin:58])
```

`\begin{align*} \Prob{S > 58 \in 1\dd12 + 8\dd10} &= 18.54\% \\ \Prob{S \le 58 \in 1\dd12 + 8\dd10} &= 81.46\% \end{align*}`

As a note on efficiency, this example performs several unnecessary (inverse)
Fourier transforms at the input and output of each call to
`prob_MdN()` and `fft_prob()`.
With sufficiently large working vectors, many dice rolls can be combined in
frequency space, deferring the only inverse Fourier transform to the end
when the final probability distribution is required.

There are additional Fourier properties that we can utilize to build up a
generalized framework for creating the distribution over any combination of
dice rolls.
For instance, there can be penalty rolls (like that from
Bane) which *subtract* values rather
than add them.
Such rolls combine via convolution as well — the non-zero probability is just
defined over negative indices rather than positive, and there is a clear way
to express functions at negative indices if we are careful to consider the
periodicity of the discrete Fourier transform when combined with the length of
the vector.

Nicely, Julia’s type system paired with its
numeric literal coefficient juxtaposition syntax
allows us to very conveniently write dice roll expressions.
The `DiceRolling` module in the attached `die_rolls.jl`
implements a small type hierarchy which defines basic arithmetic over dice and
integers that builds up simple dice roll expressions.
(The custom types also have custom printing defined so that expressions
print as we expect in D&D.)

```
include("die_rolls.jl")
using .DiceRolling
dice = (12 + 1d12) + 8d10 + 20
display(typeof(dice))
display(dice)
```

```
Main.DiceRolling.CompoundDieRolls
8d10 + 1d12 + 32
```

We can then calculate the total probability distribution for any expression
using the provided `DiceRolling.distribution` function.
Returning to the same HP example we’ve used before, this time we can actually include
the bonuses, which have been fully implemented, and we again calculate the same
`\(\sim 18\%\)` probability to exceed.
This time, though, expressing the calculation has become almost as easy as
reading the mathematical formula:

```
sum(distribution(dice)[91:end])
```

```
0.1853775833333332
```

Similarly, a simple `Plots.jl` recipe
makes plotting the probability distribution automatic within the
`Plots.jl` framework.
(The recipe automatically selects the stem plots used throughout this article,
sets the series label to the dice roll composition, and labels the axes.)

```
using Plots
plot(dice)
vline!([90], label = "HP roll = 90")
```

Change Log: ^{3}

1. For another viewpoint: if we assume the distribution is Gaussian — which the distribution is slowly converging to according to the Central Limit Theorem — then we can convert the 18.5% probability to exceed to instead state that the value `\(58\)` is just under a `\(1\sigma\)` high fluctuation: the symmetric central enclosed area is `\(\sim 63\%\)`, compared to the `\(\sim 68\%\)` central area within the first standard deviation of a Gaussian distribution. ↩︎

2. We’ve been wrapping data in `OffsetArrays` so that they carry around information about their own indices (i.e. they are not required to be 1-indexed). In the previous sections, there was no harm in doing this, but here we see the abstraction break down because the inputs to `FFTW`’s functions must be regular Julia arrays. I have elected to keep using offset arrays for clarity in indexing, but doing so requires that we explicitly (temporarily) unwrap the offset arrays with calls to `parent()` to gain access to the underlying plain Julia arrays. This does create some unnecessary overhead, but we accept that for pedagogical reasons. ↩︎

3. *Changed 2020 Nov 30:* The mathematical notation of the probability distributions has been updated from `\(\Prob{S \given \dd N}\)` to `\(\Prob{S \in \dd N}\)` to better align with the typical notation where a vertical pipe indicates a conditional probability distribution. Here, it was intended to just explicitly specify the parent distribution for any given random variable. ↩︎