OK.. before I explain this, let's review the basic idea behind RISC, Reduced Instruction Set Computing:
- Microchips with smaller instruction sets can have simpler and thus generally more efficient chip designs.
- Complex instructions are unnecessary, since they can generally be built up by combining a series of smaller, simpler instructions.
RISC chips attempt to reduce the number of processor instructions down to a mere handful of general instructions that can easily be combined to serve more complex tasks quickly. But think: because of Turing equivalence, even after you've thrown out the obviously unnecessary instructions, a few of the handful left over will always still be, in a technical sense, redundant. For practical reasons, RISC chips still do not have a truly minimal set; logically it would always be possible to reduce the set even further, finding an instruction somewhere in what few you still have that can be approximated and replaced by combining other instructions in certain ways.
So it came to pass that, in the tradition of such fine programming languages as INTERCAL and unlambda, a few computing visionaries were sitting around when the idea struck them to create an OISC:
One Instruction Set Computer.
Just think of the benefits! Clumsy lookup tables can be eliminated. Opcodes become completely unnecessary, meaning both that you conserve space (a Pyrrhic victory, to be sure, since any task will take many, many more instructions under OISC) and that the OISC can simply snarf through its registers without stopping to check what it's doing. Pipelining can be taken to a truly ludicrous extreme. (On the other hand, of course, practically all known forms of internal processor optimisation, VLIW for example, become roughly impossible, meaning the creator of any OISC chip would have to completely reinvent the wheel as to how the chip operates. But who's to say the new wheel wouldn't be faster?)
The OISC's single instruction is "Subtract and branch if negative". It takes three registers at a go, and does this:
If the current register is "."
Write into the register specified by the contents of .
the contents of the register specified by the contents of . minus the contents of the register specified by the contents of register .+1
and if the result is negative,
jump to the address given by the contents of register .+2.
For obvious reasons, no OISC chips have ever actually been manufactured; however, you can find an excellent interpreter/emulator of one (along with all known OISC software) at the retrocomputing museum's archive, downloadable at ftp://locke.ccil.org/retro/oisc.shar.gz
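To get a feel for just how little machinery is involved, here is a rough sketch in C of what the core of such an interpreter might look like. It follows the conventions of the emulator as described in this writeup (the three "special" registers for output, input and halting are explained further down, next to the example program); the actual code in oisc.shar may well differ in its details.

/* Rough sketch of a "subtract and branch if negative" interpreter.
 * The three magic addresses follow the conventions described in this
 * writeup; the real oisc.shar emulator may differ in the details. */
#include <stdio.h>

#define MEMSIZE 32768
#define WRITE   32765   /* always zero; subtracting from it prints the result */
#define READ    32766   /* its contents come from keyboard input              */
#define STOP    32767   /* branching here terminates the program              */

long mem[MEMSIZE];

void run(long pc)
{
    for (;;) {
        long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];

        if (b == READ)                    /* fetch the operand from the keyboard */
            scanf("%ld", &mem[READ]);

        long result = mem[a] - mem[b];    /* the one and only operation */

        if (a == WRITE)                   /* WRITE stays zero; print instead of storing */
            printf("%ld\n", result);
        else
            mem[a] = result;

        if (result >= 0)                  /* not negative: fall through */
            pc += 3;
        else if (c == STOP)               /* negative branch to STOP: halt */
            return;
        else                              /* negative: take the branch */
            pc = c;
    }
}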
RISC designs tend to make things as simple and easy to optimise as possible for those creating compilers, while making life much more difficult for assembly coders. OISC, as the logical extreme of RISC, takes this trade-off equally far. Writing OISC assembly can be hellish, and is pretty much impossible without the use of self-modifying code. This would not be a problem-- since the entire point of OISC is to put all the burden on the compiler-- except that, perhaps as a result of the lack of any OISC computers in existence, there are currently no compilers capable of producing OISC code. The creators of the OISC interpreter have gotten around this in the few OISC programs they have written by using assembler macros, which let you predefine inline function-like blocks for commonly needed operations such as addition and subtraction, and by using the "." notation to specify register addresses relative to the current address instead of absolutely.
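For instance, even addition has to be synthesized out of subtractions. Here is one plausible hand-expansion of an "ADD" macro (my own illustration, not taken from the shar): it computes mem[9] += mem[10], using address 11 as a scratch cell that starts out, and ends up, holding zero, and it assumes the fragment sits at address 0.

/* Hypothetical expansion of an ADD macro: mem[9] += mem[10], with
 * address 11 as a scratch zero cell. Each triple is one subtract-and-
 * branch-if-negative instruction; every branch target is simply the
 * next instruction, so the code runs straight through whether or not
 * a subtraction happens to go negative. */
long add_nine_ten[] = {
    11, 10, 3,    /* address 0: mem[11] -= mem[10], leaving -mem[10] in scratch */
     9, 11, 6,    /* address 3: mem[9] -= mem[11], i.e. mem[9] += mem[10]       */
    11, 11, 9,    /* address 6: mem[11] -= mem[11], restoring the scratch to 0  */
};

In a real program this fragment would of course sit at whatever address the assembler placed it, with more code following at address 9 and the branch targets adjusted to match, which is exactly what the "."-relative addressing is for.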
(Notice one odd benefit of OISC: since it's basically impossible to figure out before runtime which instructions are going to be jumps, writing an OISC virus would be almost impossible by current methods!)
Here is a nicely commented example of a simple (12-word) OISC program, taken from the Read Me of the shar linked above. It reads in a number and then prints it back out. (Note the use of the emulator's three "special" registers at 32765-32767, respectively named WRITE, READ and STOP: subtract from 32765 (which is always zero) and the result is printed to the screen, read from 32766 and the value is taken from keyboard input, branch to 32767 and the program terminates.)
9 32766 3      ; Address 0: Read a number, subtract it from
               ; address 9. Go to 3 if negative.
32765 9 6      ; Address 3: Print address 9 out. Go to
               ; 6 if negative.
10 11 32767    ; Address 6: Subtract contents of address 11 from
               ; address 10. Go to STOP if negative (which it
               ; will, since address 11 contains 0, and
               ; address 10 contains -1).
0              ; Address 9 - storage for the number
-1             ; Address 10 - negative one for unconditional branch
0              ; Address 11 - zero for unconditional branch
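If you want to watch it go, the twelve words above can be dropped straight into the interpreter sketched earlier (again, purely an illustration; the real emulator loads programs in its own format, and assumes the mem[] and run() from that sketch):

/* Load the example program into the sketch interpreter's memory and run it. */
int main(void)
{
    long program[] = {
            9, 32766,     3,   /* address 0: read a number, negate it into address 9 */
        32765,     9,     6,   /* address 3: print it back out                       */
           10,    11, 32767,   /* address 6: unconditional branch to STOP            */
            0,                 /* address 9: storage                                 */
           -1,                 /* address 10: negative one                           */
            0                  /* address 11: zero                                   */
    };
    for (int i = 0; i < 12; i++)
        mem[i] = program[i];
    run(0);
    return 0;
}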
Keep in mind that while all this is good for right now is obfuscated party tricks, in pure logic terms this really is by no means a Turing tarpit. Were someone to write a real compiler for the OISC and a slightly more full-featured emulator (say, one capable of inputting or outputting characters, or maybe even with some form of video memory), there is no reason an OISC could not run, say, Linux, just as easily as an x86 does. The only difficulty would be in getting the compiler to optimise well. Turing completeness is neat!