Extension of "web of trust" to quaternary, or tetralemmic, logic
The principle of quantum cryptography says that two parties can engage in a secure conversation without having met before. It relies on there being some source of entangled pairs made available to the parties... say a process creates two entangled photons, and these are sent to Alice and Bob. Now Alice and Bob each have two sets of polarising filters they can use to measure whether the photons are polarised horizontally or vertically. Because the photons are entangled, measurements made in matching bases give correlated results, while anyone intercepting a photon disturbs those correlations and so cannot extract information unnoticed. By choosing measurement bases at random, making measurements, and exchanging partial information after the fact, both parties can gain increasing levels of trust in the security of the channel, and then begin to use it for encrypted conversation. Both eavesdropping and deliberate message insertion are equally difficult, and show up against the backdrop of noise.
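The sift-and-compare step can be sketched as a toy simulation (a classical stand-in: matching bases simply yield a shared random bit, so no actual quantum behaviour is modelled):

```python
import random

def sift_round(rng):
    """One round of the sifting step: Alice and Bob each pick a random
    measurement basis.  When the bases match, the entangled pair yields
    perfectly correlated outcomes (modelled here as one shared random bit);
    when they differ, the round is discarded."""
    alice_basis = rng.choice(["rectilinear", "diagonal"])
    bob_basis = rng.choice(["rectilinear", "diagonal"])
    if alice_basis != bob_basis:
        return None                      # bases differ: discard this round
    return rng.randint(0, 1)             # stand-in for the correlated measurement

def make_key(n_rounds, seed=42):
    rng = random.Random(seed)
    return [bit for bit in (sift_round(rng) for _ in range(n_rounds))
            if bit is not None]

key = make_key(1000)   # roughly half the rounds survive sifting
```

In the real protocol the surviving bits would then be partially compared in public to estimate the noise (and hence eavesdropping) level.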
The extension to quaternary logic is quite simple... say
the web of trust is set up so that instead of each
individual being represented by a single object in the
network, the individual may maintain up to four different
"keys" used for signing "vouching" documents... In effect,
they have keys for:
- when they wish to tell the truth
- when they wish to lie
- when they wish to give an inconsistent answer, and
- when they wish to give an incomplete answer
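One way to picture this (all names here, `Intent`, `KeyRing`, `vouch`, are my own illustrative inventions, and the keyed hash is a stand-in for a real signature scheme):

```python
from dataclasses import dataclass, field
from enum import Enum
import hashlib, secrets

class Intent(Enum):
    TRUTH = "truth"
    LIE = "lie"
    INCONSISTENT = "inconsistent"
    INCOMPLETE = "incomplete"

@dataclass
class KeyRing:
    """One signing key per intent; a keyed hash stands in for a real signature."""
    keys: dict = field(default_factory=lambda: {i: secrets.token_bytes(16)
                                                for i in Intent})

    def vouch(self, intent: Intent, document: bytes) -> bytes:
        return hashlib.sha256(self.keys[intent] + document).digest()

ring = KeyRing()
tag_true = ring.vouch(Intent.TRUTH, b"I saw the event")
tag_lie = ring.vouch(Intent.LIE, b"I saw the event")
# Same statement, different intent key, different tag.
```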
We assume that the overall trust model includes the capability for people who are "closely" trusted by each other to infer, to greater degrees, which keys are being used in which context. Each person maintains a set of "liar maps" which summarise the possibilities for understanding "environmental emissions" (i.e., statements made about a shared, entropic environment) from each of the other parties in its "trust group".
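A liar map could be as simple as a table of observation counts per peer and context, normalised into a distribution over the four intents (a minimal sketch; this structure is assumed, not specified above):

```python
from collections import defaultdict

class LiarMap:
    """Per-peer belief about which intent key a peer uses in a given context.

    Stored as raw observation counts; normalised to a distribution on demand.
    The "environmental emissions" of the text are just labelled statements here.
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, peer, context, intent):
        self.counts[(peer, context)][intent] += 1

    def distribution(self, peer, context):
        c = self.counts[(peer, context)]
        total = sum(c.values()) or 1
        return {intent: n / total for intent, n in c.items()}

m = LiarMap()
m.observe("bob", "slashdot", "lie")
m.observe("bob", "slashdot", "lie")
m.observe("bob", "slashdot", "truth")
print(m.distribution("bob", "slashdot"))   # lie ≈ 0.67, truth ≈ 0.33
```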
Where it comes back to quantum cryptography is in the way that these keys will be used later on to "vouch" for some information. These keys are entirely "environmental", in the sense that they must all point back to some observations made among some group of people. Thus when decoding the information protected by the key, the holder must make decisions akin to choosing a polarisation basis for each of the sub-keys: they pass out sub-keys to a named authority (any participant-observer, as described later) and mark each according to which quadrant they think the sub-key lies in.
A wrong guess on the intent of a given sub-key will betray the fact that the interloper did not know that the person who generated that key was lying. When the group that combined to generate these environmental sub-keys discovers the intrusion, they can synthesise new, innocuous data, ignore the request for information, or reschedule the compromised keys.
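The detection idea can be illustrated as follows: a trusted member consults its liar map when marking quadrants, while an interloper without the map must guess, and so is wrong about three times in four (a toy model; the names are hypothetical):

```python
import random

QUADRANTS = ["truth", "lie", "inconsistent", "incomplete"]

def guess_quadrants(assignment, liar_map, rng):
    """Mark each sub-key with a quadrant: use the liar map where it has an
    entry, otherwise guess uniformly (which is all an interloper can do)."""
    return [liar_map.get(k, rng.choice(QUADRANTS)) for k in assignment]

def error_rate(assignment, guesses):
    wrong = sum(1 for k, g in zip(assignment, guesses) if assignment[k] != g)
    return wrong / len(assignment)

rng = random.Random(0)
assignment = {f"subkey{i}": rng.choice(QUADRANTS) for i in range(200)}

insider = guess_quadrants(assignment, assignment, rng)   # knows every intent
intruder = guess_quadrants(assignment, {}, rng)          # must guess: ~75% wrong
```

As in quantum key exchange, the elevated error rate is what exposes the intrusion.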
This scheme can be implemented efficiently using either a triangular or hexagonal lattice, where each "space" may be coloured with one of the four logical possibilities. Each of the vertices of a triangle can be marked with an O (for Observer, though "participant-observer" is more accurate). With a triangular mesh, this gives four possibilities for the level of participation of the data contained within a triangle:
- 0 No observer... meaning depends on context (some data set)
- 1 Single observer: usually means private data, but more accurately described as data in the shared net that can only be recovered through that observer
- 2 Observation about an ongoing conversation between two parties
- 3 Independent observation
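Under one reading, the participation level of a triangle is just the number of its vertices marked O. A minimal sketch of that reading:

```python
def participation(triangle, observers):
    """Level of participation for one triangle: how many of its three
    vertices are marked as (participant-)observers.  The result 0..3
    maps onto the four cases listed above."""
    return sum(1 for v in triangle if v in observers)

# A tiny mesh: triangles as vertex triples, with two vertices marked O.
mesh = [("a", "b", "c"), ("b", "c", "d"), ("c", "d", "e")]
observers = {"b", "d"}

levels = [participation(t, observers) for t in mesh]
print(levels)   # [1, 2, 1]
```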
When three parties combine and observe an event (effectively "colouring" it), they can sign an assertion that they saw it. They can then each subtract their own knowledge of the other two parties to the observation from their world model, and go through a two-stage process of storing their private part of the key in the rest of the network, so that in their absence, the network combined with the other two parties can reconstruct the assigned "colour" of the named event.
They do this by constructing two models of the new validation network. In one, they omit the information contributed to the key by the other two parties to the signing. In the other, they imagine that they themselves are the missing observer. They run various simulations (which involve the entire observer network, all using the liar protocol) which effectively train the shared map so that it will reproduce the vouching certificate when *either* that observer is missing or the other two parties are, *and* the correct environmental "map" has been presented to the network. The network as a whole gains the ability to recognise the conditions that lead to the release of a vouching certificate. The original three parties can also release the certificate when presented with the same environmental data. However, doing so requires that all three combine at the same time... no fewer will suffice.
(This doesn't invalidate the statements made above about the ability of the network to reproduce the key in the absence of a key party... the possibility has been calculated, but observers have to agree to its continued availability in their absence.)
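The all-three-or-network-plus-two access structure resembles 3-of-3 XOR secret sharing, with each observer depositing its own share with the rest of the network. This is my analogy, not the trained-map mechanism described above, but it gives the same access structure:

```python
import random

def split_colour(colour: bytes, rng):
    """3-of-3 XOR split: all three shares together recover the colour;
    any fewer reveal nothing about it."""
    s1 = rng.randbytes(len(colour))
    s2 = rng.randbytes(len(colour))
    s3 = bytes(a ^ b ^ c for a, b, c in zip(colour, s1, s2))
    return [s1, s2, s3]

def combine(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

rng = random.Random(0)
colour = b"colour-of-event2"       # a 16-byte tag naming the assigned colour
shares = split_colour(colour, rng)

# Observer 1 deposits its own share with the rest of the network, so the
# network plus the other two observers can stand in for it when absent.
network_copy = shares[0]
```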
This scheme should also allow for mixing of information from several observer groups and message streams. The information being passed around between all observers includes quite a lot of noise as well as the real content. Using the concept of "chaffing", streams can be multiplexed, turning access to the channel into a volumetric-style problem.
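Chaffing here presumably follows Rivest's chaffing-and-winnowing idea: real packets carry a valid MAC, chaff carries a bogus one, and only key-holders can winnow the stream. A minimal sketch:

```python
import hmac, hashlib, random

def mac(key, serial, bit):
    return hmac.new(key, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

def chaff_stream(key, bits, rng):
    """For each real bit (valid MAC), also emit a chaff packet carrying the
    opposite bit with a random, invalid MAC.  Without the key, real and
    chaff packets are indistinguishable."""
    stream = []
    for serial, bit in enumerate(bits):
        stream.append((serial, bit, mac(key, serial, bit)))
        stream.append((serial, 1 - bit, rng.randbytes(32)))   # chaff
    rng.shuffle(stream)
    return stream

def winnow(key, stream):
    """Keep only packets whose MAC verifies, then reassemble by serial."""
    good = {s: b for s, b, tag in stream
            if hmac.compare_digest(tag, mac(key, s, b))}
    return [good[s] for s in sorted(good)]

key = b"shared-secret"
message = [1, 0, 1, 1, 0]
stream = chaff_stream(key, message, random.Random(7))
assert winnow(key, stream) == message    # key-holder recovers the message
```

Multiplexing several such streams through one channel is then just interleaving their packets; each key winnows out its own.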
It could also be used as the basis of an auctioning scheme for offering and earmarking "capacity" for certain lengths of time or "volume" in this sort of pipeline. The use of the liar protocol then ensures the most efficient use of communication and processing resources...
To sum up, there's one question that you might be interested in... if this is analogous to quantum cryptography, then where do the entangled pairs come from? Well, they come from basic assertions stated in the form "where there is this, then also that". This can be used for generating observations about a particular map (e.g., when Bob says "I read Slashdot every day", Bob is lying), or it can be used to describe properties of the system as a whole ("when there is a trio of observers and an event, there is the possibility in the network for all three to 'vouch' that they saw the same event").
I have many more crazy ideas based around these themes.