The Q function measures the one-sided tail area of the normalized (zero-mean, unit-variance) Gaussian distribution. This is useful in several areas of statistics. Digital communications engineers use the Q function to compute the probability of bit error at a given signal-to-noise ratio for different modulation techniques.

Q(*x*) = (1/√(2π)) ⋅ ∫_{*x*}^{∞} exp(-*z*^{2}/2) d*z*

The Hewlett-Packard HP 32SII scientific calculator may be the last handheld calculator that uses Reverse Polish Notation (RPN). Its implementation has a 4-level stack that allows intermediate values of a complex calculation to be stored. A dyadic operation like addition, subtraction, or *y*^{x} operates on the immediately visible value *x* and the stack value *y*. The result is stored in *x*, the entire stack drops down, and the value in the top level is replicated. A truly ingenious concept, and incredibly efficient from a programming point of view. RPN has been my choice of calculator since my undergraduate days in engineering. I believe that Hewlett-Packard is the only company to manufacture such calculators. More's the pity.
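
To make the stack mechanics concrete, here's a toy Python model of the 4-level stack (the register names X, Y, Z, T follow HP convention; the class itself is my own sketch, not HP firmware):

```python
class Stack:
    """Toy model of a 4-level RPN stack (registers X, Y, Z, T)."""

    def __init__(self):
        self.x = self.y = self.z = self.t = 0.0

    def enter(self, v):
        # Keying in a value lifts the stack; the old T register is lost.
        self.t, self.z, self.y, self.x = self.z, self.y, self.x, v

    def dyadic(self, op):
        # A two-argument operation consumes Y and X, leaves the result
        # in X, drops the stack, and replicates T into the vacated Z.
        self.x = op(self.y, self.x)
        self.y = self.z
        self.z = self.t


s = Stack()
s.enter(3.0)
s.enter(4.0)
s.dyadic(lambda y, x: y + x)  # 3 + 4 -> result lands in X
```

After the addition, `s.x` holds 7.0 and the stack has dropped, just as described above.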

The 32SII math library has a variety of functions, but it doesn't have an error function, which is incredibly irritating. The calculator's manual contains a program for the Q function, which it calls the inverse normal distribution (p. 16-11), but it involves integration of the Gaussian distribution, and is slow. Fortunately, Abramowitz & Stegun has a marvelous approximation for the error function that bounds the error to less than 1.5 × 10^{-7}. I've diddled with it to transform the error function into a Q function for the sake of all you digital communications engineers out there.

erf(*x*) = (2/√π) ⋅ ∫_{0}^{*x*} exp(-*z*^{2}) d*z*

The Abramowitz & Stegun approximation is given by the following equations, for *x* ≥ 0.

erf(*x*) ≅ 1 - (*a*_{1}*t* + *a*_{2}*t*^{2} + *a*_{3}*t*^{3} + *a*_{4}*t*^{4} + *a*_{5}*t*^{5})⋅exp(-*x*^{2})

where

*t* = 1 / (1 + *p**x*)

and the values of the coefficients are:

*p* = 0.32759 11
*a*_{1} = 0.25482 9592
*a*_{2} = -0.28449 6736
*a*_{3} = 1.42141 3741
*a*_{4} = -1.45315 2027
*a*_{5} = 1.06140 5429

Finally, the Q function's relationship to the error function is given by:

Q(*x*) = 0.5⋅(1-erf(*x*/√(2)))
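
For anyone who'd like to check the arithmetic off-calculator, here's a short Python sketch of the same pipeline — the Abramowitz & Stegun polynomial for erf, then the Q relation above. The function names are mine, not from any of the references:

```python
import math

# Abramowitz & Stegun coefficients quoted above (digit-group spaces removed).
P = 0.3275911
A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)


def erf_as(x):
    """A&S polynomial approximation to erf(x), valid for x >= 0.

    Absolute error is bounded below 1.5e-7.
    """
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (k + 1) for k, a in enumerate(A))
    return 1.0 - poly * math.exp(-x * x)


def q(x):
    """Q(x) = 0.5 * (1 - erf(x / sqrt(2))), for x >= 0."""
    return 0.5 * (1.0 - erf_as(x / math.sqrt(2.0)))
```

With this, `q(1.0)` should land near the 0.1587 shown in the test values below.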

User-defined functions are given labels A, B, C, and so on. My particular Q function is labeled E, since I have other functions defined under the prior labels. In the program below, all of the steps are labeled Exx because all of the steps are found under the label E. Also, do NOT key a space into the numbers: I've only added spaces between the 5th and 6th digits for legibility.

E01 LBL E         E11 ENTER           E21 y^x             E31 x⇔y             E41 RCL Z
E02 2             E12 ENTER           E22 -1.45315 2027   E32 2               E42 x^2
E03 √             E13 ENTER           E23 *               E33 y^x             E43 +/-
E04 ÷             E14 ENTER           E24 +               E34 -0.28449 6736   E44 e^x
E05 STO Z         E15 5               E25 x⇔y             E35 *               E45 *
E06 0.32759 11    E16 y^x             E26 3               E36 +               E46 2
E07 *             E17 1.06140 5429    E27 y^x             E37 x⇔y             E47 ÷
E08 1             E18 *               E28 1.42141 3741    E38 0.25482 9592    E48 RTN
E09 +             E19 x⇔y             E29 *               E39 *
E10 1/x           E20 4               E30 +               E40 +
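
If you want to sanity-check the keystroke sequence without a 32SII in hand, the same arithmetic can be written out in straight-line form. This Python sketch mirrors the program's order of operations; the step ranges in the comments are my reading of the listing:

```python
import math


def q_keystrokes(x):
    """Straight-line restatement of program steps E01-E48 (my naming)."""
    u = x / math.sqrt(2.0)                # E02-E04: divide x by sqrt(2)
    # E05 stores u in variable Z for recall at E41
    t = 1.0 / (1.0 + 0.3275911 * u)       # E06-E10: t = 1/(1 + p*u)
    poly = 1.061405429 * t ** 5           # E11-E18: a5 * t^5
    poly = poly + -1.453152027 * t ** 4   # E19-E24: add a4 * t^4
    poly = poly + 1.421413741 * t ** 3    # E25-E30: add a3 * t^3
    poly = poly + -0.284496736 * t ** 2   # E31-E36: add a2 * t^2
    poly = poly + 0.254829592 * t         # E37-E40: add a1 * t
    return poly * math.exp(-u * u) / 2.0  # E41-E47: times e^(-Z^2), halve
```

The final multiply-and-halve works because the polynomial times exp(-*u*^{2}) equals 1 - erf(*u*), so dividing by 2 yields Q(*x*) directly.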

Test values^{1}

Q(0) = 0.5000
Q(0.5) = 0.3085
Q(1) = 0.1587
Q(1.5) = 0.0668
Q(2) = 0.0228
Q(3.49) = 0.0002
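
Python's standard library has carried erfc since version 3.2, so the table above is easy to reproduce for comparison, using Q(*x*) = erfc(*x*/√2)/2:

```python
import math

# Reproduce the test table via the stdlib complementary error function.
for x in (0, 0.5, 1, 1.5, 2, 3.49):
    print(f"Q({x}) = {0.5 * math.erfc(x / math.sqrt(2)):.4f}")
```
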

**How to use this program**:

- Enter *x*: 1
- Press XEQ E^{2}
- Display: 0.1587

The error function approximation is valid only for *x* ≥ 0. If you need to find the Q function for negative x, then use the identity:

Q(-*x*) = 1 - Q(*x*)

**Example**: For *x* = -1, Q(-1) = 1 - Q(1) = 1 - 0.1587 = 0.8413
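
A small wrapper makes the identity mechanical. Here `q_pos` stands in for any Q routine valid only for *x* ≥ 0 — I've used the stdlib erfc form, but the calculator program would serve the same role:

```python
import math


def q_pos(x):
    """Any Q implementation valid for x >= 0; stdlib erfc as a stand-in."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))


def q_signed(x):
    """Extend Q to negative arguments via the identity Q(-x) = 1 - Q(x)."""
    return 1.0 - q_pos(-x) if x < 0 else q_pos(x)
```
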

**NOTES**

1. Bernard Sklar, **Digital Communications**, 2nd ed., (c)2001, p. 1045
2. The 1/x key

**REFERENCES**

- Abramowitz & Stegun, **Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables**, U.S. National Bureau of Standards, AMS-55, 4th printing, Dec. 1965, p. 299. Mine's the original red-cloth-covered hardback version. Ha! How do ya like THEM apples?
- Bernard Sklar, **Digital Communications**, 2nd ed., PH/PTR, (c)2001, p. 122; table of values: pp. 1044-1045
- Daniel Zwillinger, ed., **CRC Standard Mathematical Tables and Formulae**, 30th ed., CRC Press, (c)1996, p. 498
- Steven G. Wilson, **Digital Modulation and Coding**, Prentice-Hall, (c)1995, p. 30
- John G. Proakis, **Digital Communications**, 3rd ed., McGraw-Hill, (c)1995, p. 40
- Athanasios Papoulis & S. Unnikrishna Pillai, **Probability, Random Variables, and Stochastic Processes**, 4th ed., (c)2002, p. 106
- Athanasios Papoulis, **Probability, Random Variables, and Stochastic Processes**, (1st ed.!), (c)1965, p. 64
- Rodger E. Ziemer & Roger L. Peterson, **Introduction to Digital Communication**, Macmillan, (c)1992, pp. 689-691
- Rodger E. Ziemer & William Tranter, **Principles of Communications: Systems, Modulation, and Noise**, 4th ed., Wiley, (c)1995, pp. 781-782
- Bhargava, Haccoun, Matyas, and Nuspl, **Digital Communications by Satellite**, (c)1981, pp. 44-45
- Tri T. Ha, **Digital Satellite Communications**, Macmillan, (c)1986, p. 401
- Jerome Spanier and Keith B. Oldham, **An Atlas of Functions**, Hemisphere, (c)1987, p. 385
- George R. Cooper & Clare D. McGillem, **Modern Communications and Spread Spectrum**, McGraw-Hill, (c)1986, p. 136; Appendix B, p. 423
- J.C. Bic, D. Duponteil, and J.C. Imbeaux, **Elements of Digital Communication**, Wiley, (c)1991, pp. 574-578. Excellent appendix. This is the first book I ever bought that was over $200, and it's worth every penny. If you're truly serious about digital communication theory, this is a must-have resource. Translated from French.