A friend — who is, incidentally, a lot better than me at the whole blogging thing — recently asked me about the t-distribution and what it’s used for. I ended up writing a fairly lengthy email in response and thought I might as well share it with the rest of the Internet.
Imagine the following setting: we have a sample $X_1, \dots, X_n$ of independent normally distributed variables, all with mean $\mu$ and variance $\sigma^2$. Suppose $\sigma^2$ is known and we want to test whether some value $\mu_0$ is a feasible guess for $\mu$.
H0: $\mu = \mu_0$

H1: $\mu \neq \mu_0$
The way we would typically go about this is by calculating the sample mean $\bar{X}$ and saying that under H0 it follows a $N(\mu_0, \sigma^2 / n)$ distribution. This means that under H0 the test statistic

$$Z = \frac{\bar{X} - \mu_0}{\sigma / \sqrt{n}}$$

follows a standard normal distribution and we can proceed with calculating the p-value of this test. Easy!
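To make this concrete, here’s a small Python sketch of the known-variance z-test. The sample, $\sigma$ and $\mu_0$ are all made-up numbers for illustration:

```python
# A sketch of the known-variance z-test; the sample and sigma are made up.
import math
from statistics import NormalDist, mean

x = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0]  # hypothetical data
sigma = 0.5   # standard deviation, assumed known
mu0 = 5.0     # hypothesised mean under H0

xbar = mean(x)
z = (xbar - mu0) / (sigma / math.sqrt(len(x)))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
print(z, p_value)
```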
What happens if we don’t know $\sigma^2$? It is tempting to use the (unbiased) sample variance

$$S^2 = \frac{1}{n - 1} \sum_{i=1}^{n} (X_i - \bar{X})^2$$

to estimate $\sigma^2$ and substitute $S$ for $\sigma$ in the statistic above, giving

$$T = \frac{\bar{X} - \mu_0}{S / \sqrt{n}}.$$

Assuming $T$ is approximately standard normal, we can compute an approximate p-value, and that should be fine… right?
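In Python, the tempting shortcut looks like this (again with hypothetical numbers):

```python
# The tempting shortcut: estimate sigma by the sample standard deviation
# and pretend the statistic is still standard normal. Numbers are made up.
import math
from statistics import NormalDist, mean, stdev

x = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0]  # hypothetical data
mu0 = 5.0

s = stdev(x)  # square root of the unbiased sample variance
t_stat = (mean(x) - mu0) / (s / math.sqrt(len(x)))
p_approx = 2 * (1 - NormalDist().cdf(abs(t_stat)))  # normal approximation
print(t_stat, p_approx)
```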
Let’s forget about the algebra of it for a second. In the first case, we had some uncertainty about the mean and no uncertainty whatsoever about the variance. What the hypothesis test does is check whether the uncertainty that we have is enough to explain the difference between $\bar{X}$ and $\mu_0$. Not knowing the variance gives us some extra uncertainty on top of what we had before. The more uncertain we are of what we know, the more tolerant we should be to reality not matching our expectations.
Intuitively at least, the approximate p-value above is going to be somewhat too small (anti-conservative), since we’re not accounting for the variability of the sample variance. However, we still expect $T$ to behave more or less like a standard normal — especially if our sample size is large, in which case we’re very confident about our estimate for $\sigma^2$. We can make several guesses about the density of $T$ based on this idea alone:
- it looks more or less like a standard normal (i.e. like a bell curve)
- it has wider tails
- it’s not fixed, but depends on $n$: the more data we have, the better our normal approximation
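These guesses are easy to sanity-check by simulation: draw lots of small normal samples, compute the statistic with the estimated standard deviation each time, and see how often it lands far out in the tails. A Python sketch, with arbitrary parameters:

```python
# Simulate the distribution of T for small samples and compare its tails
# with the standard normal's. Parameters are arbitrary.
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
n, reps = 5, 50_000          # small samples, many repetitions
mu, sigma = 0.0, 1.0         # H0 is true here

def t_stat(sample, mu0=0.0):
    return (mean(sample) - mu0) / (stdev(sample) / math.sqrt(len(sample)))

ts = [t_stat([random.gauss(mu, sigma) for _ in range(n)]) for _ in range(reps)]

tail = sum(abs(t) > 2 for t in ts) / reps       # observed P(|T| > 2)
normal_tail = 2 * (1 - NormalDist().cdf(2))     # what a N(0, 1) would give
print(tail, normal_tail)                        # the simulated tails are wider
```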
I do love informal approaches, but the academic community doesn’t necessarily share my enthusiasm. In any case we can all agree that knowing the exact p-value is preferable to making an approximation. This is where the t-distribution kicks in.
Definition. Let $Z \sim N(0, 1)$ and $V \sim \chi^2_k$ be two independent random variables. The t-distribution with $k$ degrees of freedom is defined to be the distribution of $Z / \sqrt{V / k}$.
While you can write the density function explicitly, it’s the form above that is the useful one. I won’t go through the algebra of it, but using it you can check that under H0 we have $T \sim t_{n-1}$. This means we can compute exact p-values for the hypothesis test (or rather we can let R compute them for us). This is what Student’s t-test amounts to!
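In Python rather than R, the exact computation is a one-liner with SciPy (assuming SciPy is installed; the data are the same made-up kind of numbers):

```python
# The exact version: compare the statistic with a t distribution on
# n - 1 degrees of freedom. Assumes SciPy is installed; data are made up.
import math
from statistics import mean, stdev
from scipy import stats

x = [4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0]  # hypothetical data
mu0 = 5.0
n = len(x)

t_stat = (mean(x) - mu0) / (stdev(x) / math.sqrt(n))
p_exact = 2 * stats.t.sf(abs(t_stat), df=n - 1)  # exact two-sided p-value

# SciPy's built-in one-sample t-test agrees:
t_check, p_check = stats.ttest_1samp(x, mu0)
print(p_exact, p_check)
```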
Now for the really cool part. We can plot the standard normal and several t-distributions with varying degrees of freedom. Here’s what happens:
Our intuition was pretty spot on!
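For the record, you don’t even need the plot to see it: the t density has a closed form, so the “wider tails, approaching normal” picture can be checked numerically. A small Python sketch:

```python
# The t density has a closed form, so the "wider tails, approaching normal"
# picture can be checked numerically without plotting anything.
import math

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def t_pdf(x, k):
    c = math.gamma((k + 1) / 2) / (math.sqrt(k * math.pi) * math.gamma(k / 2))
    return c * (1 + x * x / k) ** (-(k + 1) / 2)

# wider tails: far from zero, the t density sits above the normal
assert t_pdf(3, 3) > norm_pdf(3)
# and the gap shrinks as the degrees of freedom grow
assert abs(t_pdf(3, 100) - norm_pdf(3)) < abs(t_pdf(3, 3) - norm_pdf(3))
```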