Some introductory words

Yesterday (2001-08-14) I talked with ariels about the node Learn to count from one to ten in different programming languages, which was recently re-created because the programming language writeups in learn to count from one to ten in different languages got nuked. While I agreed that comparing languages is generally a good thing, I found the approach taken there very, very awkward - more of an invitation for us fellow noders to add a simple program in Our Favorite Language.

Also, this was pointless - counting linearly is, algorithmically speaking, a very simple thing, and there are thousands of different languages. GTKYN guaranteed.

I couldn't find either of those nodes today. Were they nuked? Well, that doesn't matter...


Here, I'm trying something more sensible. It actually took me some days of research and coding to finish this. Most of the code here was written specifically for this document. Weirdly enough, I had never implemented this algorithm in Perl before, even though Perl is my favorite programming language ever!

The following tries to explain the differences between different types of programming languages and, more specifically, different programming styles. It would be possible to write a procedural version of this program in Python, or an object-oriented version in Forth (scary thought, isn't it?).

The example programs shown here are implementations of my Stutter/Feedback Algorithm, just about the only algorithm I've ever invented and then described in this sort of detail. (I consider myself sane... well, somewhat strange, but "sane" in the clinical sense of the word. =) While it's still not much more complicated than the counting algorithm, it's complicated enough to be genuinely interesting to implement in different ways.

This writeup does recognize the "macrofamilies" of computer languages in the scientific sense, but doesn't try to follow them strictly. I use pseudo-scientific labels like "procedural languages" and such, but read some computer science books to get more accurate, OK? For example, some of the categories listed here would probably not pass muster... Is stack-based stuff really a group of its own, or just a subgroup of procedural languages, as I suspect? They sure look vastly different, though...

Also, this doesn't cover every type of programming language, just the most common ones.

Procedural languages

(...aka Imperative languages...)

Examples: C, Perl, Rexx

Example program: (In Rexx)

parse arg infile
if infile\='' then
        do while lines(infile)>0
                call stutter linein(infile)
        end
else
        do
                say 'Usage: stutterize.rexx filename'
                exit 1
        end

exit

stutter: procedure
parse arg line
string = ''
do i = 1 to length(line)-2
        string = string || substr(line,i,3)
end
say string
return

(I was about to do this in Fortran 77, just to show that We Aren't Afraid, but damn, what a big wuss I am. I ran like the coward I am when I saw Fortran's string handling... even reading stuff from stdin into variables was a challenge too mighty for me. Still, I am going to write the hideous example in F77 some day... =)

Example program: (In Perl)

#!/usr/bin/perl -w

use strict;

my ($input);

while($input = <>) {
  chomp $input;
  for(my $n = 0; $n <= length($input)-3; $n++) {
    print substr($input,$n,3);
  }
  print "\n";
}

Procedural languages approach the problem by going in, doing the Magic, and getting out. The program starts, the program ends, and stuff happens in the middle. That's the idea. That's how things are done, always. That's how it happens. The idea is, simply, to follow an algorithm step by step. While most functional languages rely heavily on recursion, procedural languages typically solve problems using iteration - in the Perl program above, we do this:

"Take next line from input source (stdin or input file, depending on invocation), remove trailing carriage return, and loop over each character of the string, stopping at the 3rd to last. Print out each 3-character substring of the string, starting from place indicated by our current loop counter value."

Most procedural languages today look surprisingly like C, probably because C's syntax is fairly neat and it's a good language that everyone should learn. I used Rexx, a fairly old but still kickin' language, as my non-ordinary example - Rexx is good for illustrating algorithms, and I won't explain at length how it works; it's mostly self-evident. Perl is a good example of a "real-world" language that has been influenced by C - the for( ... ; ... ; ...) loop has been stolen from there.

Specific-purpose scripting languages

Examples: Found in better applications everywhere!

Example program: (In TinyFugue scripting language)

/def stutter = \
   /let mystring=%{*} %; /let out= %; /let ctr=0 %;\
   /while ({ctr}<=(strlen({mystring})-3)) \
     /let out=$[strcat({out},substr({mystring},{ctr},3))] %;\
     /let ctr=$[{ctr}+1] %;\
   /done %;\
   "%{out}

Here's a good example of a specific-purpose procedural programming language. (Another good example would be ECMAScript, but that one is too much like C, so it would make a poor second example...) The TinyFugue scripting language can only be found in TinyFugue. However, as everyone can see, it borrows a thing or two from procedural languages (control structures, subroutine/macro definitions), something from shell scripts (%{*} is the equivalent of the Bourne shell's $*, for example), something from C (strlen()), and most of all, it inherits the client's command syntax (all "normal" commands, as opposed to functions, start with a slash). This sort of scripting can be done in many apps. For example, some IRC clients (that are still backwards enough not to support Perl) use rather similar syntax - but in many, the slashes can be left out.

Functional languages

Examples: Lisp, Scheme, Haskell

Example program: (In Lisp)

(defun stutter (string)
  (cond
   ((<= (length string) 2) string)
   (t (concatenate 'string
		   (subseq string 0 3)
		   (stutter (subseq string 1))))))

The idea of functional languages is to specify program execution as a self-contained function, hence the name. The program takes parameters and gives a return value. "Side effects" (modifying some state somewhere along the way) are discouraged. Of course, sometimes it makes sense to do something just for the side effect: for example, a functional program may use a "print" command not for its return value (usually the string argument itself, unmodified), but for its side effect (printing the argument to the screen).
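
To make this concrete, here's a minimal sketch in Haskell (my own illustration, not part of the original writeups): double is a pure function, while putStrLn is called purely for its printing side effect.

-- A pure function: the result depends only on the argument,
-- and calling it has no observable side effects.
double :: Int -> Int
double x = 2 * x

-- putStrLn is used for its side effect (printing); its return
-- value, of type IO (), carries no interesting data.
main :: IO ()
main = putStrLn (show (double 21))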

Lisp, as depicted above, is less "mathematical" than Haskell (and I chose it just because I already knew some Lisp, but couldn't learn Haskell in a matter of moments. Impatience rocks...). Expressed as pseudocode, the program looks like this:

The stuttered version of a string of fewer than 3 letters is the string itself (special case)

Stuttering == the first three letters of the string, followed by the stuttering of the string from its second letter onward

The idea is simple: we use recursion. The program is defined as a recursive function that passes data only as function parameters and return values - it doesn't mess with variables.

Object-oriented languages

Examples: C++, Java, Objective C, Smalltalk, Eiffel, Python

Example program: (In Python)

import sys, string

class Stutterizer:
    __processing = 0
    def stutter(self,x):
        self.__processing = 1
        stuttered = ""
        n = 0
        while n <= len(x)-3:
            stuttered = stuttered + x[n:n+3]
            n = n+1
        self.__processing = 0
        return stuttered
    def isProcessing(self):
        return self.__processing

stut = Stutterizer()
s = sys.stdin.readline()
while s != '':
    print stut.stutter(string.rstrip(s))
    s = sys.stdin.readline()

There are two things I learned while writing this program: first, the claim that the object-oriented metaphor is overkill for small problems is completely true.

Secondly, the Pythonites' claim that Perl's object orientation sucks is a case of "pot, kettle, black." True, Python has a "class" reserved word while Perl uses mystical rituals of blessed references, but that doesn't help much when all class members are effectively public and all methods virtual - which is just as bad as the situation in Perl! (Or has this changed in Python 2? I'm sure Perl 6 will address these issues too...) (Update, 2005-04-01: Please just disregard this mindless, mindless offtopic babbling. Sorry if I struck a nerve of the Python fans - it's one language I have had severe trouble "getting". If I get amused enough, I may rewrite this example in Ruby; I have no complaints about that language =)

Okay, enough rambling. Above, you can see that I have a class called Stutterizer with two methods: stutter (which returns the stutterized string) and isProcessing (which returns the value of the "private" member). The __processing private member just holds information about whether or not we're processing - this in an attempt to make the thing thread-safe, which is overkill because there's nothing to be thread-scared of in this class.

The actual meat of the definition - the stutter() method - is not remarkably different from Your Average Procedural Solution. The difference is that we use encapsulation to hide state variables and provide an "object"-like concept. We have a thingy that makes weird noises when you feed stuff into it.

"Create a new Stutterizer object. Read lines from standard input until you hit EOF, using stutter() method of our Stutterizer object to process strings and printing out results."

Low-level languages

(Here, "low-level" means roughly "not really that easy for trained monkeys to grasp at first, but really schweet if you happen to be a computer".)

Stack-based languages

Examples: Forth, MUF, PostScript

Example program: (In Forth)

: stutter ( addr len -- )       \ assumes len >= 3
  bounds swap 2 - swap ?do      \ limit = end-2: the last chunk starts 3 bytes before the end
      i c@ emit i 1 + c@ emit i 2 + c@ emit
  loop
;

Most users of procedural languages who have never seen languages like this will probably now stand there uttering "Oh, what the hell is that?" and "You was right, there are more unreadable languages than Perl". =)

Stack-based languages are, as the name implies, based on a stack. As such, they

  1. Take surprisingly little to run (many really small embedded systems use Forth, for example), and
  2. Are hard to understand for humans but really friendly for computers.

For example, math in Forth follows RPN pretty closely: "2 3 +" will leave 5 on top of the execution stack. One might argue that the designers of Forth were just too damn lazy to convert stuff from postfix to infix, but... hey, it's all in the name of machine/compiler efficiency!
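
If you want to play with the idea outside Forth, here is a toy sketch of RPN evaluation in Haskell (the function rpn and its token handling are my own invention, not anything from a real Forth system): the stack is a plain list whose head is the top.

-- A toy RPN evaluator: "2 3 +" leaves [5] on the stack.
rpn :: [String] -> [Int]
rpn = foldl step []
  where
    step (y:x:st) "+" = (x + y) : st   -- pop two, push the sum
    step (y:x:st) "*" = (x * y) : st   -- pop two, push the product
    step st tok       = read tok : st  -- anything else is pushed as a number

-- Example: rpn (words "2 3 +") evaluates to [5].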

The Forth program above, explained:

"Define new word (procedure) 'stutter'. It takes one argument which we store to local variable s. This word will not return anything. Take string, and figure out its boundaries in the memory, leaving start and end addresses to the top of the stack. Now, swap the addresses, put number 3 on the top of the stack, and subtract the top number of stack from second to topmost, leaving that to the top of the stack. (Translation: Subtract 3 from the stack top number, that is, the end address of the string s.) Now, swap the top two numbers again. (Translation of the beginning part of this definition: We take memory range and subtract 3 from the end address, leaving them on same order on the top of the stack.) Using the topmost two numbers as numeric memory address range, do the following: take next numeric address (represented by variable i), fetch the character (ASCII values!) at this location, and put it to the top of the stack. Interpret this ASCII value from top of the stack as a character and send it to standard output (emit). Repeat this twice, but adding 1 and 2 to the address i before fetching the value. Now, Okay, repeat this for the whole address range. This concludes our new word."

All that for a couple of symbols. Now, I hope you see why these programming languages, without proper documentation of the code, tend to become write-only.

You can use this program by getting gforth, loading the file with gforth stutter.fs, and typing s" Something here" stutter cr to see the effect.

Assembly

Examples: Glorious RISC Code, Inferior CISC Shit

Example program: (In 6502 assembly, for the Commodore 64, compilable with xa)

#include <xa/c64.s>
	
	.word $C000
*=$C000

MAIN:	LDA #<text		; Store start address to pointer P1
	STA P1			;  on zero page (address $00fb)
	LDA #>text
	STA P1+1
	LDX #$00

LOOP:	LDY #$00		; Unrolled for clarity and speed
	LDA (P1),Y
	JSR CHROUT
	INY
	LDA (P1),Y
	JSR CHROUT
	INY
	LDA (P1),Y
	JSR CHROUT
	INC P1
	INX
	CPX tlen
	BNE LOOP
	LDA #13		; Print carriage return.
	JSR CHROUT
	RTS

text:	.byte "tHIS IS A TEST OF THE eMERGENCY bROADCAST sYSTEM."
tlen:	.byte 47		; Number of 3-character chunks (text length minus 2).

Sorry, I was too tired to write an input/output system, and this code isn't exactly elegant. But it works, and it's fast...

Here, we descend deeper, deeper, deeper into the realm of the Processor. What you are looking at now is a good symbolic description of the real code the processor can execute.

Here's a description of the program:

"First, the start address of the text is stored to memory, addresses 00FB and 00FC (collectively known with symbolic names P1 and P1+1, but the latter isn't important from our point of view). The character counter, stored in processor register X, is reset. In the loop, we first reset our character pointer Y to 0. Then, we repeat following three times: We take the address of the start of the string from P1, add the character pointer from Y, and fetch the character from that address to accumulator (register A), and then, we send the character to screen using kernal (sic) routine CHROUT, after which we increment our character pointer at Y. After these three repeats, increment the pointer value at P1, and increment the character counter at X. If we have not yet got tlen letters, jump back to loop start. Else, print out carriage return and return from the machine-language subroutine.

(The fancy name for the addressing mode used here is "Indirect,Y".)

Interestingly, here the code is 43 bytes and the data is 50 bytes...

Stuff missing from this document at the moment

Feel free to /msg me more suggestions or corrections!

In Haskell:

stutter :: [a] -> [a]
stutter [] = []
stutter l@(x : []) = l
stutter l@(x : y : []) = l
stutter (x : y : z : xs) = x : y : z : stutter (y:z:xs)

So, what does this all mean? The first line is a type signature; it says that stutter maps a list of elements of any type a to another list of elements of type a. a is a type variable: it can stand for any type, however complex. The concept is similar to templates in C++ or generics in Ada. While it's good practice, especially for the sake of documentation, to provide a type signature, it's not really necessary. Much like ML, Haskell uses Hindley-Milner type inference: by examining the way the function uses its arguments, the compiler can almost always figure out the most generic possible type of the function.
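
For instance (a made-up two-liner, not from the writeup): define a function with no signature at all, and GHCi's :type command will report the most general type the compiler inferred.

-- With no signature given, the compiler infers the most general type;
-- GHCi's :type reports  pair :: a -> b -> (a, b)
pair x y = (x, y)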

The remaining lines define the stutter function, using a pattern matching system similar to those in ML, Erlang, and related languages. The pattern matching system is quite neat: patterns look like the data structures they match, except with variables. This is similar in some ways to Lisp's destructuring-bind, but much more powerful. Not only that, but it makes Haskell (and ML) programs look a lot like mathematical definitions.

Anyway, back to the code. The first line after the type signature says that stutter of the empty list is the empty list. The next two say that stutter of a list with one or two elements is simply that list. `x:y:[]' means `a list beginning with x, followed by y, followed by nothing'. The : operator in Haskell means basically the same thing as `.' (dotted-pair notation) in Lisp, except that : is right-associative; that is, x:y:[] means x:(y:([])); or, in Lisp terms, (x . (y . nil)) = (x y). The @ makes l a synonym for the entire expression within parentheses.

Having set up our inductive basis, we now can express the inductive step (recursion) in the last line. Here, xs (pronounced like the plural of `x') stands for `the rest of the list', and can be the empty list; in Lisp, xs would be accessed as (cdr (cdr (cdr l))). The line as a whole says that, for a list l of three or more elements, stutter of l is a list consisting of the first three elements of l, followed by stutter of the tail (cdr) of l. And there you have it.
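
Tracing the definition on a short input may help; this hand-evaluation assumes the (corrected) definition above.

-- A worked expansion on the string "abcd":
example :: String
example = stutter "abcd"
-- = 'a' : 'b' : 'c' : stutter "bcd"
-- = "abc" ++ ('b' : 'c' : 'd' : stutter "cd")
-- = "abc" ++ "bcd" ++ "cd"    -- "cd" has fewer than 3 elements: base case
-- = "abcbcdcd"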

Two interesting things to note. First of all, stutter works for more than just strings (in Haskell, simply lists of characters). It also works for any list type, be it a list of integers ([Integer]), a list of lists of strings ([[[Char]]]), or a list of functions, each of which maps lists of elements of type b to functions from integers to Maybe bs ([[b] -> Integer -> Maybe b]), and so forth.

Second, and perhaps the defining characteristic of Haskell, is the language's use of lazy evaluation. Suppose we have a very long string, and we want to know what the first four characters of its stutterization are. Of course, we could just write another function to do this. However, with Haskell one can simply say:

take 4 (stutter verylongstring)

That's it. Because, in Haskell, things are not evaluated until they are needed (e.g. for printing or some such), we can leave it at that. The system will not try to compute the full value of stutter verylongstring unless you need one of the last elements. For that matter, verylongstring could be an infinite list (more properly known as a stream). Until some non-deferrable code such as I/O forces evaluation of this expression (directly or indirectly), the code might never be executed at all. I will not go into the details here, but with luck someone (perhaps I at a later date) will create a node on lazy evaluation or call by need.
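
As a concrete illustration (my own example, using the definition above): stutter applied to the infinite list [1..] still answers instantly when only a prefix is demanded.

-- Laziness in action: [1..] is infinite, yet take forces only as much
-- of the stuttered list as it actually needs.
firstNine :: [Int]
firstNine = take 9 (stutter [1..])
-- firstNine == [1,2,3,2,3,4,3,4,5]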

One problem with the above function, as well as (as of 3 Apr 2002) the Lisp function in WWWWolf's writeup above, is that it is not tail-recursive (q.v.). Thus the maximum number of activation records on the stack grows linearly with the length of the input list. Since Haskell implementations eliminate tail calls, a tail-recursive version would be better:

tstutter :: [a] -> [a]

tstutter x = let stut :: [a] -> [a] -> [a]
                 stut [] lis = lis
                 stut l@(x : []) lis = x : lis
                 stut l@(x : y : []) lis = y : x : lis
                 stut (x : y : z : xs) lis =
                     stut (y:z:xs) (z : y : x : lis)
             in  reverse (stut x [])

In this version, stut is tail-recursive. Because the recursive call to itself is the last thing done by stut, the compiler can simply replace the parameters on the stack and execute the call---no new activation record needs to be pushed onto the stack.

In the tail-recursive version, the parameter lis to stut accumulates partial results of the computation as we recurse down the list. Recall that appending to the end of a singly-linked list is O(n) on the size of the list; thus, rather than tacking things onto the end of the accumulator parameter, stut actually builds the list backwards by prepending elements---an O(1) operation. We then reverse this list in tstutter. The transformation taking stutter to tstutter appears over and over again in functional programming.
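
Here is how that plays out on "abcd" (a hand-evaluation, assuming the definition above); note how the accumulator lis grows backwards at each step.

traced :: String
traced = tstutter "abcd"    -- == "abcbcdcd"
-- The recursion proceeds as:
--   stut "abcd" ""    ->  stut "bcd" "cba"
--   stut "bcd" "cba"  ->  stut "cd"  "dcbcba"
--   stut "cd"  "dcbcba"  ==  "dcdcbcba"   (two-element base case: y:x:lis)
-- and reverse "dcdcbcba" yields "abcbcdcd".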

Incidentally, tstutter demonstrates another interesting feature of Haskell. Much like Python, it is layout-sensitive. The occurrences of stut after the let should be lined up - otherwise, they may be interpreted incorrectly. Programs can also be written without layout, using braces and semicolons; this is typically only done by compilers and automated program generation tools.
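
For illustration, here is the same helper written without layout, using explicit braces and semicolons (tstutter' is my renaming, to avoid a clash with the version above):

tstutter' :: [a] -> [a]
tstutter' x = let { stut [] lis = lis
                  ; stut (a:[]) lis = a : lis
                  ; stut (a:b:[]) lis = b : a : lis
                  ; stut (a:b:c:xs) lis = stut (b:c:xs) (c:b:a:lis) }
              in reverse (stut x [])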

Object oriented vs. procedural programming

A brief examination and comparison of two prominent programming paradigms.

If you have limited experience with programming and programming languages, then the experience you do have is almost certainly with procedural programming. In essence, procedural methods are a somewhat "flat" way of building software. The program is designed to start at the top and finish at the bottom. There are subroutines (sometimes labelled "functions", although functional programming is something else entirely) that take and return values, but that is as complex as it gets.

Procedural programming is simple, which is why C, Pascal, or even BASIC are taught to most beginner programmers. (They are all procedural languages, although with C++, Delphi, and Visual Basic, they have all been extended into object-orientedness.)

Object Oriented Programming (or OOP) is a more structured and organised programming methodology. Data structures and code are merged to form classes, which contain methods and variables. Well-defined interfaces between classes are constructed, fostering code reuse, ease of modification, and extensibility. (There are some great nodes that detail many of the OO design principles, which do not bear repeating here.)

So which is better? Well, there are a number of advantages to each.

Procedural programming allows more direct control over the hardware the program runs on. Assembly code, one step above the basic machine code that processors interpret, is procedural. Procedural languages allow the programmer to write more time-efficient code, and thus are more suitable for time-critical applications (3D games, statistics engines, etc.). Procedural applications are also simpler in design, as mentioned above.

Object-oriented code will always be somewhat less efficient than its procedural equivalent, and indeed any program that can be written in an OO language can be written in a procedural fashion. But the massive advantage of OO design is that as the application's functionality becomes more and more complex, the programming itself becomes more and more trivial (whereas the inverse is true for procedural programming). Additionally, because of OO's highly structured approach to design, it is an order of magnitude easier to debug OO code than procedural code (which can turn into a nightmare depending on your development environment, project size, and the amount of black magic you're using).

I guess the bottom line is this:

  • Choose Procedural if your project is relatively small, simple, or time-critical.
  • Choose Object-Oriented if your project is large and complex, and you want to be able to reuse your code easily.

Of course, some have been known to use a combination of both within the same project. Sometimes this is a desirable, and in my opinion excusable (if not acceptable), practice.

It has been about 2 years since I wrote this, and I'm not impressed by it. First, I have become interested in functional languages and type inference. Second, C is mostly procedural but does have function pointers, which are nearly a form of first-class functions, giving C some functional abilities. I feel the benefits of functional languages far outweigh those of object-oriented languages, all other things being equal, which they rarely are.

Object oriented vs. procedural programming

I'm not the best person to be discussing the matter, in light of the fact that I'm only a sophomore in CS at Purdue and I don't have a huge basis of experience with which to attack the claims made above. On the other hand, I consider myself a somewhat well-informed and generally reasonable person, and I think there are some things worth pointing out. Note that when I talk about procedural languages, I'm mostly referring to C (I believe it's fair to say that many others are too), but respected procedural languages like Pascal and Perl will generally have the same facilities.

My response to the OOP paradigm and those who push it as a universal good is this: object-oriented programming has limited applications (it is not universal) and can encourage certain design flaws. Procedural programming has perceived weaknesses in structure and function because some programmers who use it are disorganized and dysfunctional. I'm not talking about anyone in particular, but Microsoft comes to mind.

ESR (Eric S. Raymond) isn't everyone's hero or anything, but he does have respect in the hacker community. I would like to quote the following from The Art of Unix Programming:

The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas. It's worth trying to understand why.
Read the section yourself (web address below) to see the details, but the general point (or one of them) is that object-oriented design tends to pile things on top of each other, so that a large project can become a pile of layers that do not necessarily correspond to anything. In short, OO can lead to overcomplication. This is a point that needs bearing out in detail, which ESR has already done; for that reason you might want to look at the book.

The most prominent argument for OOP, at least in my experience, is that it is inherently structured and organized; I submit that OOP is only a tool for structure and organization, which can be misused or ignored. The more general design principle is modularization, which says that code should be written in well-separated pieces. Modularization is a simple tool which is available in almost any language; OO languages provide objects explicitly for this purpose (among others), but the exact same thing can be done in C. The most common way to do this is to provide a separate header file and employ good taste in design (see orthogonality and the other design concerns in The Art of Unix Programming) so that those using the interface (even if you created it) do not need to concern themselves with the internals. In short, if you can structure and organize a module properly, it should be just as effective as designing the same module with object orientation.
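
The same principle applies in any language that can separate an interface from its internals. As a sketch in Haskell (the module name and helper are hypothetical), the export list plays the role of the C header: users of the module can call stutter but can never touch the helper.

-- Stutter.hs: the export list (stutter) is the module's public
-- interface; the helper stut stays internal, much like a static
-- function hidden behind a C header.
module Stutter (stutter) where

stutter :: [a] -> [a]
stutter xs = reverse (stut xs [])
  where
    stut (a:b:c:rest) acc = stut (b:c:rest) (c:b:a:acc)
    stut short        acc = reverse short ++ acc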

For me, the 'bottom line' is much more general:

  • Procedural programming provides power and flexibility (enough rope to hang yourself with).
  • Object oriented programming facilitates certain (good) design practices and has logistical advantages in some situations.
  • Functional programming offers an extreme amount of expressive power and flexibility, but offers it in a very safe, orthogonal manner. Functional solutions to complicated problems tend to be dramatically shorter than equivalent programs in object-oriented or procedural languages. Therefore, it will be dismissed as magic by otherwise sane people.
(Do you like how I snuck functional programming in there without mentioning it in the top part? More on this after my pants.) This 'bottom line' is not saying much; I think that's perfectly alright! I think it is unwise to draw strong conclusions without a lot of strong evidence, or to feel an absolute necessity to declare a 'winner'.

Other points

Exemplary use of C: I think it is fair to point out that the original Unix system was entirely rewritten in C, at a time when assembly was in vogue. Unix is a fairly large system, and it is written entirely in a procedural manner. Modern Unix or Unix-like systems such as Linux, *BSD and the Hurd are all written in C; there is so much high-quality C code flying around that I would think very carefully before declaring that large projects should be done in an object-oriented language.

Procedural programming and the learning process: Another objection I have to OOP is that it enforces good design rather than encouraging it. Forcing a programmer to do things a certain way does not mean that he will do them well that way, or that he will learn to do things that way in general - only that he will do that one task that way. We (the CS students at Purdue) are prime candidates for jobs in systems programming, so I find it a little disturbing that the ideas of modularization and good design went nearly untouched during the first year of our training here, in favor of having to learn three programming languages and get through calculus 1 and 2. The basic principles of design and taste are more important than knowing whatever languages are in vogue. But I digress; in short, OOP does not cause programmers to learn good form any more than holding a piece of string makes it straight.

Programming paradigms versus individual languages: When discussing procedural programming versus OOP, I (and many others, I'm sure) think largely in terms of C versus C++, Java, Python, et cetera. It is reasonable to choose C to represent procedural programming because it is very well known and respected among procedural languages, and it is one of the cornerstones of modern programming.

On the subject of speed and efficiency, I have nearly nothing to say. This is something else that ought to be handled on the basis of individual languages. In the case of C vs. C++, I have no knowledge on which to pass judgement; I can only guess that C++ would show very similar speed, since it is close to the hardware in the same manner as C. Among VM-centric (Java, Parrot, .NET) and interpreted languages, each has its own performance defects, but I don't intend to be an expert on them until I need to be.

Functional programming: This writeup (and two others) used to be in object oriented vs. procedural programming until I had them moved; that said, it doesn't make much sense to get functional programming involved in the argument. I've taken a look at lambda calculus and poked around Scheme, and I can easily tell that functional languages have a very clear purpose - I just have no idea what it is. If someone could give me a quick rundown, I could add it here.

Sources:

I've tried to make this writeup more general and clear; let me know if I've missed something big.

I disagree with the 'bottom lines' noted in the "Object oriented vs. procedural programming" writeups.

The statement that C++ is some percent less efficient than C makes me cringe when I read it.

It was noted in the article, or one of the links, that C++ often gets converted to C during compilation, so it's just as efficient. That makes me cringe too!

It is true that in C++ there exist more complicated mechanisms that must be driven, such as the vtable, and these things do require extra cycles. But in my 12 years of experience I've learned that these issues are usually not that big a deal.

Usually, in a program that is perceived as "slow" and in need of optimizing, the code that is executing most of the time is localized to a limited number of routines that happen to be doing a lot of work. It is usually the case that in these few places, the code that is consuming the lion's share of the CPU cycles can be optimized in C++ to run just as fast as it would have in C.

This will leave a ton of other code that consumes a little more CPU time than it should, due to its C++ overhead. But all that other code doesn't run much, so it doesn't account for much of the total time.

The statement "Procedural programming allows more direct control over the hardware it is being run upon" is very misleading. There is nothing that can be done in assembly that can't be done in C or C++. While assembly may be more direct, in most cases it's a lot harder to write and less portable; that's why it's usually avoided like the plague and, if used, only used in specific subroutines where it is deemed most needed. C has no particular advantage over C++ when it comes to controlling hardware. And procedurally implemented C has no advantage over object-oriented implementations in C++. (In my opinion, of course...)

One needs to be very careful in these debates to be clear about when one is talking about a language (C vs. C++, for example) and when about a software design approach (procedural vs. OO, for example).

I've been using C++ for years. But I choose a design approach every time I start a new project. The approach is often "mixed": I often end up with a very procedural-looking main() that creates and uses several objects.

In my opinion it all boils down to your skill set. Some people don't have the skills to do OO design and implementation. If they try to use OO, or any language they don't know, it's going to suck, or at least take them a long time to do a good job.

Those who have the skills, and can do a job efficiently using any design approach they know, will probably choose a design they think will work; if you ask them whether they used procedural or OO, they will have to stop and think about it, because they won't know. All they will know is that they chose what seemed best at the time.

Among the 20 or so SW engineers I know well, most of whom learned OO on the job and are good at it, this OO vs. procedural argument never comes up. It's commonly accepted that OO is the way to go, and in fact there is usually strong pressure within this group to get the OO work and not get stuck doing procedural stuff.

And if you look into the folks who are having this kind of debate, it's usually people who don't know OO well and don't have a significant amount of experience using OO yet.

I found this somewhere on the interwebs in 2004, so any hope of a correct citation is long lost. Minor changes and formatting are mine, as are the Matlab and RPGCode entries in their entirety. For those of you who had a life in your early teenage years, RPGCode is a C-based language used for scripting within the RPGToolkit, a free collection of utilities for making tile-based RPGs from scratch. Perhaps I'll node it one day. Feel free to suggest better descriptions for any language, or even to point out the origin, if you know it.

The basic idea is to illustrate the particular faults or quirks of each language in the context of performing a simple task. Note that even successful operation would be harmful to you, but most languages refuse even to cooperate in that, and instead find elaborate ways to be more harmful than you intended. The few which result in benign outcomes (cf. BASIC) are humorous exceptions which illustrate the apparent impotence of the language: all the others hurt you more than you wanted them to, but these can't hurt you at all even when you want them to.

It helps if you've had some programming experience before reading this, as it makes the whole thing a lot funnier. I haven't used all of these languages personally, but the five I have used are described accurately, and they gave me enough insight into the differences between languages to understand (in most cases) what each entry actually says about a language's operation. For example, Matlab differs from C in that programming is easier when everything is stuffed into a vector, since Matlab has so many quick vector operations. It also has a powerful graphing utility (which C entirely lacks), so even though you could shoot yourself in the foot in Matlab, you're likely to end up doing something else instead, because it's so much easier that way, even if it wasn't exactly your goal. The humor of the Microsoft C++ w/ Windows SDK entry should be readily apparent to anyone who's used a Microsoft product, not just programmers.


The proliferation of modern programming languages which seem to have stolen countless features from each other sometimes makes it difficult to remember which language you're using. This guide is offered as a public service to help programmers in such dilemmas.

  • C: You shoot yourself in the foot.

  • Assembly: You crash the OS and overwrite the root disk. The system administrator arrives and shoots you in the foot. After a moment of contemplation, the administrator shoots himself in the foot and then hops around the room rabidly shooting at everyone in sight.

  • APL: You hear a gunshot, and there's a hole in your foot, but you don't remember enough linear algebra to understand what the heck happened.

  • C++: You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical care is impossible since you can't tell which are bitwise copies and which are just pointing at others and saying, "That's me, over there."

  • Ada: If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad, and tell the soldiers, "Shoot at his feet."

  • MODULA-2: After realizing that you can't actually accomplish anything in the language, you shoot yourself in the head.

  • Pascal: Same as Modula-2, except the bullets are the wrong type and won't pass through the barrel. The gun explodes.

  • sh,csh,etc: You can't remember the syntax for anything, so you spend five hours reading man pages before giving up. You then shoot the computer and switch to C.

  • Smalltalk: You spend so much time playing with the graphics and windowing system that your boss shoots you in the foot, takes away your workstation, and makes you develop in COBOL on a character terminal.

  • FORTRAN: You shoot yourself in each toe, iteratively, until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue anyway because you have no exception-processing ability.

  • ALGOL: You shoot yourself in the foot with a musket. The musket is aesthetically fascinating, and the wound baffles the adolescent medic in the emergency room.

  • COBOL: USEing a COLT45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER, and SQUEEZE. THEN return HANDGUN to HOLSTER. Check whether shoelace needs to be retied.

  • BASIC: Shoot self in foot with water pistol. On big systems, continue until entire lower body is waterlogged.

  • PL/I: You consume all available system resources, including all the offline bullets. The Data Processing & Payroll Department doubles its size, triples its budget, acquires four new mainframes, and drops the original one on your foot.

  • SNOBOL: You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).

  • LISP: You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds...

  • SCHEME: You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds... ...but none of the other appendages are aware of this happening.

  • English: You put your foot in your mouth, then bite it off.

  • MICROSOFT C++ w/ WINDOWS SDK: You write about 100 lines of code to print "Hello, world!" in a dialogue box, only to have a UAE pop up when you click on OK. This shuts down the program manager, leaving you nothing but a screensaver. You then fly to Washington and shoot Bill Gates in the foot.

  • LOGO: You tell a turtle to draw a picture of a foot and a gun, then shoot the turtle.

  • Matlab: You load all your toes into a vector and shoot at their average location while printing graphs of the blood spatter trajectories.

  • RPGCode: You can't find the command to pull the trigger, but you know eight different ways to make the bullets glow in the dark and cause 17 points/second of mana depletion. Your foot runs obliviously in a circle around the town well.
