A simple geometric tool for exploring how well hypothesis tests can discriminate between two competing hypotheses, H and not H, on the basis of the results of a diagnostic test T. The diagnostic (potentially informative) signal T is binary valued: either positive for (favourable to) or negative for (unfavourable to) the truth of H.
Note: To use the dynamic features of the figure in your browser you will need to download and install Wolfram's free Computable Document Format (CDF) Player, an app for running dynamic Mathematica programs called CDFs; this WordPress page enables CDFs to run from within your browser. Click here to obtain the player from Wolfram's site. Installation is straightforward, but you will have to provide a small amount of information to Wolfram, effectively "registering" that you have downloaded their CDF Player software. Wolfram, the makers of Mathematica, have been around as long as Apple and Adobe, much longer than Google, and are as trustworthy; moreover, once you have the CDF Player there are tens of thousands of fascinating and academically useful user-created Wolfram Demonstration Project programs available to run on it. So don't be put off by the short registration requirement.
[WolframCDF source=”https://strategicecon.com/wp-content/uploads/2013/04/vod-hu-v3.cdf” CDFwidth=”655″ CDFheight=”505″ altimage=”https://strategicecon.com/wp-content/uploads/2013/04/vod-hu-v3.png”]
H is an indicator variable equal to 1 if H is true and equal to 0 if H is false, i.e. not H is true. T is an indicator variable taking the value 1 when there is evidence favourable to the truth of H, and the value 0 for evidence unfavourable to the truth of H. Think of H as a "state" variable with possible values 0 or 1, and T as a "signal" for the state variable, a diagnostic test.
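To make the state/signal framing concrete, here is a minimal sketch in Python (not the Mathematica code behind the CDF) that draws one (H, T) pair; the probability values and the function name are illustrative placeholders, not the demonstration's settings.

```python
import random

def draw_state_and_signal(p_H=0.30, p_T0_given_H=0.10, p_T0_given_notH=0.80):
    """Draw the state H and the signal T as 0/1 indicator variables.
    All three probabilities are illustrative placeholder values."""
    H = 1 if random.random() < p_H else 0   # H=1: the hypothesis is true
    if H == 1:
        # unfavourable signal despite H being true, with probability p_T0_given_H
        T = 0 if random.random() < p_T0_given_H else 1
    else:
        # unfavourable signal when H is false, with probability p_T0_given_notH
        T = 0 if random.random() < p_T0_given_notH else 1
    return H, T

print(draw_state_and_signal())   # e.g. (0, 0)
```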
The vertical height of each curve indicates, for each given prior probability that hypothesis H is true, a posterior probability: P(H is true|T=1) in red and P(H is true|T=0) in blue, one for each of the two possible test results, T=1 or T=0. Posterior probabilities, the chances of the hypothesis being true AFTER observing a test result T=1 or T=0, will, if the test is in any way informative, change beliefs, either increasing or decreasing the chances of the hypothesis being true compared to the situation where the test isn't done, P(H). The vertical difference between the coloured curves, P(H is true|T=1) - P(H is true|T=0), is a measure of the amount of information, if any, in the test.
The calculated posterior probabilities are based on three other pieces of information, each a manipulable parameter (the sketch after this list shows the arithmetic):
- the "size" of the test: the false negative error probability P(T=0 | H is true), conditional on H being true
- the "power" of the test: P(T=0 | H is false), equal to 1 minus the false positive error probability P(T=1 | H is false), conditional on H being false
- the unconditional or prior probability P(H) of the hypothesis being true
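For readers who want the arithmetic behind the curves, here is a short Python sketch (again, not the demonstration's own Mathematica source) that applies Bayes' rule to these three parameters under the conventions above, size = P(T=0 | H is true) and power = P(T=0 | H is false); the numerical settings at the end are illustrative, not the CDF's benchmark values.

```python
def posteriors(prior, size, power):
    """Return P(H true | T=1) and P(H true | T=0) via Bayes' rule, where
    size  = P(T=0 | H true)   (false negative error probability) and
    power = P(T=0 | H false)  (so P(T=1 | H false) = 1 - power)."""
    p_T1_given_H    = 1 - size       # favourable signal when H is true
    p_T1_given_notH = 1 - power      # false positive
    p_T1 = p_T1_given_H * prior + p_T1_given_notH * (1 - prior)
    p_T0 = 1 - p_T1
    post_T1 = p_T1_given_H * prior / p_T1   # red curve: P(H is true | T=1)
    post_T0 = size * prior / p_T0           # blue curve: P(H is true | T=0)
    return post_T1, post_T0

# Illustrative settings, not the CDF's benchmark values
prior, size, power = 0.30, 0.10, 0.80
hi, lo = posteriors(prior, size, power)
print(f"P(H|T=1) = {hi:.3f}, P(H|T=0) = {lo:.3f}, gap = {hi - lo:.3f}")
```

Sweeping the prior from 0 to 1 while holding size and power fixed traces out the red and blue curves in the figure; the printed gap is the vertical difference described above.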
The size and the power of the test can be set independently using the sliders on the left, or click the small + at the end of a slider to enter a numerical value. The prior probability on hypothesis H also has a slider. For comparison purposes, a benchmark (BM) setting for these variables can also be chosen in the lower left-hand panel. Comparing the posterior curves for the two different test results shows how well the test discriminates between H and not H.
Gigerenzer-style natural frequencies are calculated in the accompanying truth table: the simple integer arithmetic of "counts" out of stylized sample sizes n = 100, 1000, or 10000 is helpful in "framing" the inverse inference task, by making it easy to identify type 1 errors (false negative rates conditional on H=1) and type 2 errors (false positive rates conditional on H=0).
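As a rough illustration of that framing (same caveat: the parameter values are illustrative, not the demonstration's), the 2x2 table of expected counts can be built directly from n, the prior, the size, and the power:

```python
def natural_frequency_table(n, prior, size, power):
    """Expected counts out of n cases, rounded to whole numbers, under the
    conventions size = P(T=0 | H=1) and power = P(T=0 | H=0)."""
    n_H    = round(n * prior)         # cases where H is true
    n_notH = n - n_H                  # cases where H is false
    fn = round(n_H * size)            # T=0 when H=1: the "type 1" errors in the post's labelling
    fp = round(n_notH * (1 - power))  # T=1 when H=0: the "type 2" errors in the post's labelling
    return {
        "H=1, T=1": n_H - fn,  "H=1, T=0": fn,
        "H=0, T=1": fp,        "H=0, T=0": n_notH - fp,
    }

print(natural_frequency_table(n=1000, prior=0.30, size=0.10, power=0.80))
# {'H=1, T=1': 270, 'H=1, T=0': 30, 'H=0, T=1': 140, 'H=0, T=0': 560}
```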