A large part of the statistics courses I took during my undergraduate degree involved parametric modelling. There, “noise”, “white noise”, and “Gaussian noise” were often used interchangeably to mean the same thing — independent normally distributed random variables with mean 0, which (hopefully) account for the difference between the theoretical model and the observed data.
I’ve come across this particular piece of jargon again and again, and it almost always refers specifically to the normal distribution. I hadn’t really thought much of that: “noise” and “error” both refer to disturbances in the “true” signal, and we tend to think of errors as normally distributed by default. None of my undergraduate work ever involved noise in the colloquial sense, so I had no idea whether this reflected real noise in any way… until a few days ago.
I’ve been working on a short research project at IBME for the past few weeks. My work involves analysing voice recordings made in uncontrolled environments (mostly just people’s homes), and the data I have also includes short recordings of background noise. I haven’t worked much with those yet, but I did plot a histogram of one I picked at random. Here’s what it looks like!
Pretty neat, huh? I expected it to look roughly symmetric and bell-shaped, but I’m pretty sure this is the best fit of real data to a normal distribution I’ve ever seen.
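If you want to go beyond eyeballing the histogram, a quick sanity check is to standardize the samples and look at their skewness and excess kurtosis, which are both 0 for a perfectly normal distribution. Here’s a minimal sketch of that idea — note the “recording” below is synthetic stand-in data, since I can’t share the actual audio:

```python
import numpy as np

def normality_summary(samples):
    """Standardize the samples, then return (skewness, excess kurtosis).
    Both quantities are 0 for an exactly normal distribution."""
    z = (samples - samples.mean()) / samples.std()
    skew = np.mean(z ** 3)
    excess_kurtosis = np.mean(z ** 4) - 3.0
    return skew, excess_kurtosis

# Synthetic stand-in for a background-noise recording's samples
# (in practice you'd load the waveform from the audio file instead).
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=0.02, size=100_000)

skew, kurt = normality_summary(samples)
print(f"skewness = {skew:.3f}, excess kurtosis = {kurt:.3f}")
```

For truly Gaussian data both numbers come out close to 0; heavy tails or asymmetry in the recording would show up immediately as a large excess kurtosis or skewness.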