Monday, 14 July 2014

The importance of what is not said



Scenario #1: the magic coin

Someone flips a coin four times and obtains the following values:
{Head, Tail, Head, Head}
Based on the outcome of the above experiment he reports:
My coin is magic: it has a 3/4 probability of Heads, and 1/4 of Tails.

 

Scenario #2: the voting visionary

Next weekend an election will be held in your country. There are two candidates: A and B. Someone interviews ten people and obtains the following voting intentions:
{A, B, B, B, A, A, A, A, A, B}
Based on the above he reports:
The result of the election will be A: 60%, B: 40%.

Scenario #3: the peak usage

You measure the system usage at the peak hour on three different days. The measured values are:
{70%, 70%, 85%}
Based on the above you report:
The average system usage at the peak hour is 75%.


Think

Surely you consider the man in the coin and voting scenarios silly, or at least utterly ignorant, because he is making bold predictions based on very little empirical evidence. However, you probably accept without objection what the man in the system usage scenario (by the way, you!) is saying, but…
…THINK!
There is nothing significantly different about the three scenarios. So if something is wrong in the coin and voting cases, the same may be true for the peak usage case.

 

The confidence interval

When you try to estimate a certain quantity based on a limited number of measurements, the reported value is affected by what is called the statistical error. This is inherent to any result obtained by sampling. This statistical error is not an “error” in the common sense of the word; instead it expresses the precision or reliability of the reported figure. Instead of a single value, you have an interval, that is, you MUST say:
The value is likely to be between this and that.
The length of this interval corresponds to the imprecision or uncertainty: if the length is small, the precision is high; if the length is large, the precision is low. Clearly it is not the same to say
The head probability is likely to be between 0.2 and 0.8.
a very low precision determination, as it is to say
The head probability is likely to be between 0.499 and 0.501.
a much higher precision one.

The confidence interval is the official name, in mathematics or statistics, for such an interval.
The degree of “likelihood” (…is likely to be…) is quantified by what is known as the confidence level. Typical values are 90% or 95%. The loose meaning of a 90% confidence level is that there is a 90% probability that the true value (the one we are trying to determine) lies within the interval (and, consequently, a 10% probability that it falls outside).
The confidence interval is centered around the sample mean, the arithmetic average of the data (the measured values):
Center of the confidence interval =  AVERAGE(data)
where data is the sample, or set of measured data points.

The confidence interval length is typically estimated with the following formula:
Length of the confidence interval = 2 * T.INV.2T(1-conf, size-1) * STDEV.S(data) / SQRT(size)
where size is the sample size (number of data points), conf is the desired confidence level, and T.INV.2T(), STDEV.S() and SQRT() are worksheet (Excel) functions. (I'm not focusing here on the details; I'm paying more attention to the formula's consequences and dependencies than to the formula itself.)
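If you prefer code to worksheet formulas, here is a minimal Python sketch of the same calculation, using scipy.stats.t.ppf in place of T.INV.2T() (T.INV.2T(p, df) corresponds to t.ppf(1 - p/2, df)). The sample values below are made up purely for illustration.

from math import sqrt
from statistics import mean, stdev   # stdev() is the sample standard deviation, like STDEV.S()
from scipy.stats import t

def confidence_interval(data, conf=0.90):
    """Return (center, length) of the two-sided t confidence interval."""
    n = len(data)
    center = mean(data)                                                 # AVERAGE(data)
    length = 2 * t.ppf(1 - (1 - conf) / 2, n - 1) * stdev(data) / sqrt(n)
    return center, length

sample = [70, 72, 68, 75, 71]         # hypothetical measurements, for illustration only
center, length = confidence_interval(sample)
print(f"value likely between {center - length / 2:.1f} and {center + length / 2:.1f} (90% confidence)")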

This length depends on the following factors: 

  •  the sample size or number of data points: the length decreases, and the precision increases, when the number of data points increases.
  •  the sample variability/dispersion: the length increases, and the precision decreases, when the data is noisy, erratic, or highly variable.
  •  the confidence level: the length increases when the confidence level increases. Reason? If you want increased certainty in your report, a larger “safety” margin is needed.

To get a better idea of this, look at the following table, which shows the approximate confidence interval length versus the sample size for a variable that can range from 0 to 100, at a confidence level of 90%. To achieve a precision of 3.5% you need around 1000 measurements!

Size    Length
   5      64
  10      38
  20      25
  30      20
 100      11
1000     3.5
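For the curious, here is a rough Python sketch of how a table like the one above can be produced with the length formula. The standard deviation used below (about a third of the 0-100 range) is only an assumption chosen to reproduce the order of magnitude of the numbers, not a value taken from any real data.

from math import sqrt
from scipy.stats import t

conf = 0.90
s = 33   # assumed sample standard deviation for a variable ranging from 0 to 100
for n in (5, 10, 20, 30, 100, 1000):
    length = 2 * t.ppf(1 - (1 - conf) / 2, n - 1) * s / sqrt(n)
    print(f"size {n:5d} -> length ~ {length:.1f}")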

Let’s go back and revisit our scenarios, but now equipped with the above ideas and guidelines.

Scenario #1: the magic coin (REVISITED)

Someone flips a coin four times and obtains the following values:
{Head, Tail, Head, Head}
Based on the outcome of the above experiment he MUST report:
My coin has a head probability between 0.3 and 1, with a confidence level of 90%.

If you want to increase the precision, that is, reduce the confidence interval length, you must increase the sample size, that is, the number of flips. With 100 flips you are going to obtain something like:
My coin has a probability of heads between 0.45 and 0.55, with a confidence level of 90%.
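As a sketch of where numbers like these come from, the snippet below computes a binomial proportion confidence interval with statsmodels (the exact Clopper-Pearson method, method="beta"). The endpoints quoted above are approximate: different standard methods give slightly different intervals, and the 50-heads count used for the 100-flip case is just an assumption for illustration.

from statsmodels.stats.proportion import proportion_confint

# 3 heads out of 4 flips, 90% confidence level (alpha = 0.10)
low, high = proportion_confint(count=3, nobs=4, alpha=0.10, method="beta")
print(f"4 flips:   head probability between {low:.2f} and {high:.2f}")

# with 100 flips (assuming, say, 50 heads observed) the interval narrows sharply
low, high = proportion_confint(count=50, nobs=100, alpha=0.10, method="beta")
print(f"100 flips: head probability between {low:.2f} and {high:.2f}")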

Scenario #2: the voting visionary (REVISITED)

Next weekend an election will be held in your country. There are two candidates: A and B. Someone interviews ten people and obtains the following voting intentions:
{A, B, B, B, A, A, A, A, A, B}
Based on the above he MUST report:
The result of the election with a confidence level of 95% will be:
 A between 30% and 90% and B between 10% and 70%.

This has changed a lot from the initial bold prediction; it is much blurrier now. Typical opinion polls that estimate the true percentage of the vote with reasonable precision and a confidence level of 95% require a sample size of around 1000 people. Have a look at the fine print next to the results when you see such a study in your newspaper.
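One common textbook way to obtain such an interval (not necessarily the one used for the figures above) is the normal approximation for a proportion, p +/- z * sqrt(p*(1-p)/n). The sketch below applies it to the ten interviews and to a hypothetical poll of 1000 people; it gives roughly 30%-90% for A in the first case and a margin of about +/-3 percentage points in the second.

from math import sqrt
from scipy.stats import norm

def margin(p, n, conf=0.95):
    z = norm.ppf(1 - (1 - conf) / 2)   # about 1.96 for a 95% confidence level
    return z * sqrt(p * (1 - p) / n)

p = 0.6                                # 6 of the 10 interviewees chose A
for n in (10, 1000):
    print(f"n = {n:4d}: A between {p - margin(p, n):.0%} and {p + margin(p, n):.0%}")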

 

Scenario #3: the peak usage (REVISITED)

You measure the system usage at the peak hour on three different days. The measured values are:
{70%, 70%, 85%}
Based on the above you MUST report:
The average system usage at the peak hour is between 65% and 85% with a confidence level of 90%.

If you want to increase the precision, that is, reduce the confidence interval length, you must increase the sample size, that is, the number of daily measurements. With 20 days you will obtain something like:
The average system usage at the peak hour is between 74% and 78% with a confidence level of 90%.
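As a final sketch, the snippet below plugs the three measurements into the worksheet-style t formula from earlier. With only three data points the endpoints are very sensitive to the exact method and rounding, so they will not coincide exactly with the rounded figures quoted above.

from math import sqrt
from statistics import mean, stdev
from scipy.stats import t

data = [70, 70, 85]                    # the three peak-hour measurements
conf, n = 0.90, len(data)
half = t.ppf(1 - (1 - conf) / 2, n - 1) * stdev(data) / sqrt(n)
print(f"average usage between {mean(data) - half:.0f}% and {mean(data) + half:.0f}% (90% confidence)")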


Things to remember


  • There is an unavoidable uncertainty in your measurements and calculations.
  • Any estimation based on sampled data or a limited number of measurements must be accompanied by its precision/uncertainty.
  • Avoid too small samples. The sample size should be large enough to obtain a reasonable precision.
  • What is not said, the error, is usually as important as the reported value itself.
  • Many marketing tricks rely on not telling the whole story and deliberately hiding the estimation error. Do not act like the malicious or ignorant people who create those marketing messages.


In the next post I'll take a closer look at the system usage case. Stay tuned.
