If we actually wish to use the Neyman-Pearson lemma, we immediately run into the dilemma described under Bayes' theorem -- we don't know P(H | O), the probability that our hypothesis is true given the observations. However, by using that inversion formula, we can obtain a more tractable decision procedure.
In most useful cases, we'll use Bayes' theorem to help us estimate P(H1 | O) and P(H0 | O) (the probabilities that the hypothesis and the null hypothesis are true, respectively, given our observations). Our mathematical models only tell us how likely the observations are, given that the model is true; that is, we know P(O | H) for various hypotheses H. But Bayes tells us that
                      P(H)
P(H | O) = P(O | H) ⋅ ----,
                      P(O)
and we want to look at P(H1 | O) / P(H0 | O), as per the Neyman-Pearson lemma.
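To make the inversion concrete, here is a minimal numerical sketch in Python, using made-up probabilities for a single observation O (all the numbers are assumptions for illustration only). It computes both posteriors via Bayes' theorem and checks that their ratio is just the likelihood ratio P(O | H1) / P(O | H0) scaled by the prior ratio P(H1) / P(H0), with P(O) dropping out:

    # Hypothetical numbers, for illustration only.
    p_h1, p_h0 = 0.2, 0.8      # priors P(H1), P(H0)   (assumed)
    p_o_given_h1 = 0.9         # likelihood P(O | H1)  (assumed)
    p_o_given_h0 = 0.3         # likelihood P(O | H0)  (assumed)

    # Total probability of the observation, P(O).
    p_o = p_o_given_h1 * p_h1 + p_o_given_h0 * p_h0

    # Bayes' theorem: P(H | O) = P(O | H) * P(H) / P(O).
    p_h1_given_o = p_o_given_h1 * p_h1 / p_o
    p_h0_given_o = p_o_given_h0 * p_h0 / p_o

    # The posterior ratio equals the likelihood ratio times the
    # prior ratio -- P(O) cancels.
    posterior_ratio = p_h1_given_o / p_h0_given_o
    likelihood_ratio = p_o_given_h1 / p_o_given_h0
    assert abs(posterior_ratio - likelihood_ratio * (p_h1 / p_h0)) < 1e-12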
Expand both conditional probabilities using Bayes; after cancelling out P(O), which appears in both, we can "swallow" the remaining ratio P(H1) / P(H0) into the decision parameter t, which appears in the Neyman-Pearson lemma anyway. This leads to a decision procedure which asks whether
P(O | H1)
--------- > t'
P(O | H0)
where, again, t' controls the trade-off between false negatives and false positives. Since the scaling we applied to t to get t' involves the unknown prior ratio P(H1) / P(H0), a good value for t' must be determined by other means (usually empirically): unlike t, t' is an empirical parameter. But in practice, this is overcome quite easily.
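As a sketch of how t' might be chosen empirically, the following Python snippet assumes, purely for illustration, that H0 and H1 are unit-variance Gaussians with means 0 and 1; it simulates observations under H0 and picks the smallest t' whose false-positive rate stays below a chosen target. Nothing in the calibration step depends on the Gaussian choice -- only the likelihood function would change.

    import math
    import random

    random.seed(0)

    def likelihood(o, mean, sigma=1.0):
        """Gaussian density P(O | H) -- an assumed model for illustration."""
        return (math.exp(-((o - mean) ** 2) / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)))

    def likelihood_ratio(o):
        # P(O | H1) / P(O | H0), with assumed means 1.0 (H1) and 0.0 (H0).
        return likelihood(o, 1.0) / likelihood(o, 0.0)

    def decide(o, t_prime):
        """Accept H1 whenever the likelihood ratio exceeds t'."""
        return likelihood_ratio(o) > t_prime

    # Calibrate t' empirically: simulate observations under H0 and take the
    # smallest threshold whose false-positive rate is below a target.
    target_fp_rate = 0.05
    samples_h0 = [random.gauss(0.0, 1.0) for _ in range(100000)]
    ratios = sorted(likelihood_ratio(o) for o in samples_h0)
    # The (1 - target) quantile of the ratio under H0 gives the threshold.
    t_prime = ratios[int((1 - target_fp_rate) * len(ratios))]

    fp_rate = sum(decide(o, t_prime) for o in samples_h0) / len(samples_h0)
    print(f"t' = {t_prime:.3f}, empirical false-positive rate = {fp_rate:.3f}")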