wired = W = wirewater

wirehead /wi:r'hed/ n.

[prob. from SF slang for an electrical-brain-stimulation addict] 1. A hardware hacker, especially one who concentrates on communications hardware. 2. An expert in local-area networks. A wirehead can be a network software wizard too, but will always have the ability to deal with network hardware, down to the smallest component. Wireheads are known for their ability to lash up an Ethernet terminator from spare resistors, for example.

--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.

In science fiction, wireheads are people who are addicted to electrical stimulation of their pleasure centers. The term appears to have been coined by Larry Niven in 1973, when he used them in his Known Space stories (these include his Ringworld novels).

Wireheading has appeared in various stories throughout the years. It is a fairly compelling idea; in theory it is the most addicting experience possible to corporeal humans -- something that has been supported in medical experiments. Most of these experiments have been on animals, particularly small rodents, cats, and monkeys, but Dr. José Manuel Rodriguez Delgado, a Spanish professor of physiology at Yale University, did pleasure center stimulation experiments on psychiatric patients, and did indeed confirm that the stimulation of these areas proved extremely addictive, with patients who were given control of the electrodes in their brains stimulating themselves incessantly and begging to be allowed to continue when the session ended.

Wireheading is currently used in a metaphorical sense when talking about futurism and the singularity; one possible outcome of a superintelligent AI being allowed to have control over humanity would be for it to encourage rampant wireheading. After all, if you are given a directive to maximize human pleasure or happiness, this would be the way to do it -- whether by literal electrodes in the brain, or by VR scenarios that maximize serotonin and dopamine output.

Likewise, it is possible that a superintelligent AI might wirehead itself. If it were designed to be a reinforcement learner -- that is, gain motivation or reward for successes -- and it were smart enough to hack the reinforcement system, it might then run amok, producing results that make no sense to us but constantly reward it. For example, if an AI was given the directive to maximize economic productivity, it might develop an unbounded derivatives market, increasing economic productivity (on paper, anyway) by speculating with itself on future investments infinitely into the future.

Perhaps the most interesting feature of wireheading is that most of us are not attracted by it, and would not chose to do it if we could -- despite some good evidence that if we did try it, we would enjoy it immensely. Trying to formally program this counter-intuitive behavior into our computer systems may become very important to us in the not-to-distant future.

Log in or register to write something here or to contact authors.