Gather round, children, and listen to the tale of the largest numbers man has ever known.
TREE(n) is a function.
The strict definition of TREE(n) is "the length of the longest sequence of n-labelled trees T_1, ..., T_m in which each T_i has at most i vertices, and no tree is embeddable into a later tree." Labelled trees are like the normal mathematical concept of trees, except each vertex can carry one of n labels. Tree embeddability is a kind of "less than or equal to" relation between trees. (It's weird.) The fact that TREE(n) is a well-defined, finite number at all is guaranteed by Kruskal's tree theorem, which states that labelled trees are well-quasi-ordered under this embedding.
Basically, it's a whole bunch of crazy graph theory stuff. The interesting part is not how TREE works internally, but what it spits out. Consider the following values of TREE(n).
TREE(1) = 1.
TREE(2) = 3.
TREE(3) is so unfathomably large that humanity has not yet been—and likely never will be—capable of inventing a way to define scales large enough to comprehend the number of orders of magnitude beyond our reach it is.
This is not a joke. In fact, it might just be an understatement.
Let's consider some other large numbers, to put this into perspective.
If at any point during the writeup, you feel dizzy, lost, confused, or scared, then you are reading this article correctly, and there is nothing wrong with you.
A computer scientist called Donald Knuth once invented a system of notation for expressing very large numbers, and it is still in use today. The gist of it is that X ↑ Y equals X^Y (X to the exponent Y), and X ↑^k Y represents a Y-item-long chain X ↑^(k-1) X ↑^(k-1) X ↑^(k-1) ... ↑^(k-1) X. The operation is right-associative, to prevent any easy simplifications. For instance, 3 ↑^2 3 (which can also be written 3 ↑↑ 3, as the superscript is just a convenience for large numbers of arrows) is a power-tower of three 3's: 3^(3^3). This is 3^27, or about seven-and-a-half trillion (7,625,597,484,987). As another example, below is 2 ↑^3 4.
2 ↑↑↑ 4 = 2 ↑↑ 2 ↑↑ 2 ↑↑ 2
= 2 ↑↑ 2 ↑↑ (2 ↑ 2)
= 2 ↑↑ 2 ↑↑ 4
= 2 ↑↑ (2 ↑ 2 ↑ 2 ↑ 2)
= 2 ↑↑ (2 ↑ 2 ↑ 4)
= 2 ↑↑ (2 ↑ 16)
= 2 ↑↑ 65536
= 2 ↑ 2 ↑ 2 ↑ ... ↑ 2
That is to say, this is a power-tower made out of twos that is 65536 numbers high. That is to say, it's huge. For a sense of scale, the top five twos of that tower alone already evaluate to 2^65536, which is approximately 2.0×10^19728, and there are still 65531 levels of the tower to go below them.
Suffice to say, things get pretty large pretty quickly with the up-arrow notation.
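The recursion behind the arrows is short enough to sketch in Python. This is a minimal sketch of the definition above (the function name `up` is my own); it only works for tiny inputs, since anything like 3 ↑↑↑ 3 is already hopeless to evaluate.

```python
def up(x, k, y):
    """Knuth's up-arrow: x ↑^k y.

    k = 1 is plain exponentiation; higher k right-folds a chain
    of y copies of x joined by (k-1)-level arrows.
    """
    if k == 1:
        return x ** y
    if y == 0:
        return 1  # empty-chain convention: x ↑^k 0 = 1
    # Right-associative: x ↑^k y = x ↑^(k-1) (x ↑^k (y - 1))
    return up(x, k - 1, up(x, k, y - 1))

print(up(3, 2, 3))  # 3 ↑↑ 3 → 7625597484987
print(up(2, 3, 3))  # 2 ↑↑↑ 3 = 2 ↑↑ 4 → 65536
```

Try bumping any argument up by one and watch your interpreter give up; that is the whole point of the notation.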
For those astute enough to notice that these operators are extending the typical hierarchy of operations (succession, addition, multiplication, exponentiation) to a more abstractified level, well done, and have a cookie.
Ronald Graham was investigating a problem involving 2-colourings of the complete graph on the vertices of an n-dimensional hypercube (see, there's the graph theory again!). He was trying to prove that, for a large enough number of dimensions, every such colouring must contain four coplanar vertices whose six connecting edges all share one colour. His upper bound on the number of dimensions needed has been described by mathematicians as "fucking huge", and is known as Graham's number. (His lower bound was 6, and this was bumped up to 13 in 2008.)
Consider a sequence of numbers g_n. Each number of this sequence is of the form 3 ↑^m 3, where m is some number. To be specific, the m of each g_n is equal to g_{n-1}; that is to say, each number in the sequence describes the number of Knuth arrows that are present in the next number.
We can safely say that these numbers start off absolutely colossal and get even bigger, as g_1 = 3 ↑↑↑↑ 3 is itself on the order of "large enough that Wikipedia's visual aid is its own section and pictures". To summarize:
g_1 = 3 ↑↑↑↑ 3 unpacks into a chain of power-towers of 3s, in which each tower's height is given by the value of the tower before it, and the number of towers in the chain is itself a power-tower of 3s whose height is 3^(3^3) = 7,625,597,484,987.
So there's no doubt that this stuff gets ugly quickly, given that g_2 = 3 ↑^(g_1) 3.
Now, you may have made the connection between the sequence being called g_n and the name 'Graham' starting with a 'g'. So which one is Graham's number, you might ask? It's g_64.
Let that sink in for a bit. Number 64 in the stack. Where each preceding number describes the number of arrows in the next. Where the arrows are Knuth's up-arrows.
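To see the shape of that recursion (not to compute it!), here is a sketch in Python. The name `graham_like` is my own, and the base-2 call is a deliberate toy: it exploits the fact that 2 ↑^k 2 = 4 for every k, so the loop actually terminates. With base 3 and 64 steps this would be Graham's number, and your machine would never finish step one.

```python
def up(x, k, y):
    # Knuth's x ↑^k y; k = 1 is plain exponentiation.
    if k == 1:
        return x ** y
    if y == 0:
        return 1
    return up(x, k - 1, up(x, k, y - 1))

def graham_like(base, steps):
    """g_1 = base ↑↑↑↑ base, then g_(n+1) = base ↑^(g_n) base.

    graham_like(3, 64) is Graham's number (utterly uncomputable);
    graham_like(2, n) collapses to 4 at every step, since
    2 ↑^k 2 = 4 no matter how many arrows k you use.
    """
    g = up(base, 4, base)
    for _ in range(steps - 1):
        g = up(base, g, base)
    return g

print(graham_like(2, 64))  # → 4: the base-2 toy never escapes 4
```

The collapse with base 2 is exactly why Graham's construction uses 3s: two copies of 2 make every arrow level degenerate.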
Some call the Ackermann function the predecessor of Knuth's up-arrows and of fast-growing recursive sequences in general. Others are convinced it is batshit insane.
              / n + 1                        if m = 0
Ack(m, n) = <   Ack(m - 1, 1)                if m > 0 and n = 0
              \ Ack(m - 1, Ack(m, n - 1))    if m > 0 and n > 0
It gets huge really quickly, because of the nested recursion in the third case; once the left argument climbs past 2, the growth goes exponential and beyond. A definition using Knuth's arrows is Ack(m, n) = 2 ↑^(m-2) (n + 3) - 3. The reader is encouraged to play a little bit with the Ackermann function themselves, to get a sense of how quickly it explodes into a large number.
For the purposes of this writeup, let us also define the one-argument version of the Ackermann function: A(n) = Ack(n, n). It's like a compact version of the Ackermann, if that wasn't a hugely ironic statement. For reference (ha ha ha), A(4) is said to be on the order of 2 ↑ 2 ↑ 10^19729.
This version of the Ackermann function is a convenient tool for mathematicians who study large numbers, as it naturally encapsulates a Knuth-arrow level of hugeness, and it comes with historical precedent and research: it is one of the oldest examples of a total computable function that is not primitive recursive.
One final note. If we want to denote repeated applications of A on some number (yeah, that's right, I went there), then we will use a superscript on the A to denote how many applications we're using. For instance, A^3(0) = A(A(A(0))) = A(A(1)) = A(3) = 61.
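Since the reader was invited to play with the Ackermann function, here is a small Python playground for it (memoized, since the naive recursion revisits the same arguments constantly; the helper names are my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, n):
    """Ackermann function, straight from the three-case definition."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def A(n):
    """One-argument version: A(n) = Ack(n, n)."""
    return ack(n, n)

def A_iter(k, n):
    """k-fold application A^k(n), e.g. A^3(0) = A(A(A(0)))."""
    for _ in range(k):
        n = A(n)
    return n

print(ack(3, 3))     # 61, matching 2 ↑ (3 + 3) - 3
print(A_iter(3, 0))  # A(A(A(0))) = A(A(1)) = A(3) = 61
```

Anything with a first argument of 4 or more is already off the table: even ack(4, 2) is a 19,729-digit number, and A(4) is the monster quoted above.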
Let's talk about n(4).
The torture continues. n(k) is a function defined by Harvey Friedman, intended to solve a combinatorics problem relating to subsequences of words. According to Wikipedia, n(k) is the length of the longest possible sequence that can be constructed with a k-letter alphabet such that no block of letters x_i, ..., x_{2i} is a subsequence of any later block x_j, ..., x_{2j}. n(1) = 3 and n(2) = 11. n(3) is less than g_64, but is said to be greater than 2 ↑^7197 158386.
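Unlike everything else in this writeup, n(1) and n(2) are small enough to find by brute force. The sketch below encodes the block condition as Wikipedia states it (1-based blocks x_i..x_{2i} with i < j and both blocks fitting inside the sequence); the function names are my own, and this is an illustration rather than anything from Friedman's papers.

```python
from itertools import product

def is_subsequence(block, later):
    """True if `block` appears in `later`, not necessarily contiguously."""
    it = iter(later)
    return all(ch in it for ch in block)

def is_valid(seq):
    """No block x_i..x_{2i} may be a subsequence of a later block
    x_j..x_{2j}; both blocks must fit, so i < j <= len(seq) // 2."""
    half = len(seq) // 2
    for i in range(1, half + 1):
        for j in range(i + 1, half + 1):
            if is_subsequence(seq[i - 1:2 * i], seq[j - 1:2 * j]):
                return False
    return True

def n(k):
    """Length of the longest valid sequence over a k-letter alphabet."""
    length = 0
    while any(is_valid(s) for s in product(range(k), repeat=length + 1)):
        length += 1
    return length

print(n(1), n(2))  # → 3 11
```

For n(3) this approach is, of course, comedy: the search space at the lower bound quoted above makes Graham's construction look polite.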
Now, n(4) is monstrous. I'd call it huge if I hadn't already used the term multiple times before, but that's what happens when you play with numbers as large as these. Where g_64 might be said to be on the order of A^64(4), n(4) is something more akin to A^(A(187196))(1). Yes, you're reading that right: there's another A in that superscript. There are that many A's.
So, about that TREE(3) thing I mentioned.
So, we've run the gamut of large numbers. Even if you haven't sat there and tried to properly grasp what sorts of shenanigans we've just gotten up to, you're probably confident that we now have a notation fit for this guy.
Well, we don't.
When I earlier stated that TREE(3) was incomprehensibly large, this was taking this whole writeup and more into account. I was not even kidding. TREE(3) is absolutely unparsable. n(4), which we know to be greater than any other number in this article, couldn't even hold a candle to TREE(3). TREE grows so fast, the mathematical name for how fast it grows is effectively something along the lines of "impredicatively transfinite-recursively fast". That is to say, so fast that even using the mathematical tools specifically designed to measure how fast things grow, we have to call upon objects considered infringing upon the domain of the infinite.
In conclusion: TREE(3) is, quite literally, incomprehensibly huge. There are other monster numbers that mathematicians might reference as being enormous, but TREE(3) is so much so that it's nigh-impossible to even adequately quantify how incomprehensibly huge it is.
And, of course, yo momma's bathroom scale is larger still.
A lot of this was done from memory, but in terms of fact-checking, here are some assorted sources: