As dictionaries explain, ontology is the philosophical study of the nature of being: what exists and what can or cannot exist. It’s part of metaphysics, the branch of philosophy that tries to sort out the first principles of things, the ultimate roots of life, the universe and everything. What could be more basic than ‘being’ itself? Indeed, ontology is also known as ‘first philosophy’.
Until very recently, the word ontology was known to and used by only a minuscule fraction of humanity. It’s one of those obscure, seemingly irrelevant words whose meaning even the highly educated tend to forget because they never have occasion to use it, except maybe when trying to make an impression on the easily impressed.
Nevertheless, ontology is a term that has made the great leap up from esoteric philosophy directly into engineering, bypassing the intermediate stage of science almost altogether. It has recently been sucked into the geek glossary and become a buzzword in the fields of artificial intelligence, natural language processing, knowledge engineering and, of course, the World Wide Web, specifically in the Web 3.0 incarnation ("semantic Web").
How did this happen? Let’s start back at the beginning.
Traditional Philosophical Ontology
After several millennia of deep thought by minds so bright that no ordinary person could look directly at them, ontology has divided into two main streams. One stream follows Plato and his lot in believing that realia, the things that actually exist, are beyond experience. That’s classical idealism. The other stream flows from Aristotle, Hume and a bunch of other more recent professional thinkers called the logical positivists. They tell us, not surprisingly, just the opposite: the objects of our experience are the things that actually exist. That’s one kind of realism. A few other isms stick to ontology as well: solipsism, monism and dualism, to name the main ones. They deal mostly with how many basic kinds of stuff exist and what that might mean.
It’s Full of Objects
The idea of objects, their characterization and their possible interrelations, has always been part of ontology. In Platonic realism, the categories of things and the properties of things have real existence as things in themselves and are known a priori. Aristotle, on the other hand, held that some things exist only 'in the understanding', as mental phenomena built bottom-up from experience. Regardless of one's view of the nature of an object's existence, the shift in attention from the essence of being in itself to theorizing about objects and their properties, relationships and changes began to bleed ontology into epistemology, the philosophy of knowing and understanding. I would even say that this is where philosophy gives birth to what develops over the centuries into modern science. Science emphasizes analysis and the specification of relationships with the goal of making testable predictions; that process is considered the basis of understanding. The idealist tradition, on the other hand, can be seen as the foundation for mysticism and dualism. The strong tension between these two streams of thinking is very much alive and visible in our daily lives, but especially in fictional literature.
In the modern context, ontology has become mainly theorizing about objects. The development of theories on the identification and definition of objects, their properties, their part-whole relationships, their class-member relationships, and other imaginable relationships has been the most active area in ontology in the last few centuries. In the last decade or so, ontology has been hijacked for use in making things understandable to machines.
Is It Reasonable?
Ontology took a big step toward becoming something actually useful and meaningful outside the knitting circles of philosophy with the introduction of formal logic in the 19th century. Edmund Husserl seems to have been the first to call the marriage of formal logic and ontology 'formal ontology'. The fruit of this marriage is that we can create formal systems that allow us to reason about the relations and properties of objects with special computer programs that are, appropriately enough, called 'reasoners' (see Cyc, for example). That lets us (and computers) make assertions that are necessarily true about things and states of affairs based on our ontology, and that is a powerful knowledge multiplier.
For example, if our ontology tells us that there is a category of animals called quadrupeds that is defined by the feature of having four legs and some guy says that a bulwargle is a quadruped, then you immediately know that a bulwargle is an animal that has four legs and shares all of the other defining features of quadrupeds, and animals, and living things, and so on.
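The bulwargle inference can be sketched in a few lines of Python, using a class hierarchy as a stand-in for the ontology. Everything here (the `Quadruped` category, the `Bulwargle` and its features) is just the article's toy example made executable, not a real ontology formalism:

```python
# A toy ontology encoded as a Python class hierarchy.
class LivingThing:
    alive = True            # defining feature of all living things

class Animal(LivingThing):
    pass

class Quadruped(Animal):
    legs = 4                # the defining feature of the category

# "Some guy says that a bulwargle is a quadruped":
class Bulwargle(Quadruped):
    pass

bulwargle = Bulwargle()

# That single assertion lets us derive everything the categories entail.
print(bulwargle.legs)                 # 4
print(isinstance(bulwargle, Animal))  # True
print(bulwargle.alive)                # True
```

The single statement "a bulwargle is a quadruped" propagates every inherited feature automatically, which is exactly the 'knowledge multiplier' effect described above.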
"Wow!", you say. "Brilliant!" But you are being a bit of a sarcastic smartass. This is just common sense and you are not impressed.
What we need to remember, however, is that this is a formal system, which means that clever machines like your computer, tablet device, or smartphone can use it. This is a way for your computer (or more exactly, a program that runs on your computer) to learn and use common sense and to generate new knowledge on the basis of existing knowledge. This is the reason for all the buzz in fields such as natural language processing, knowledge engineering and artificial intelligence. The people who are engaged in designing and building practical systems in these areas are finally catching on to ways of efficiently representing knowledge structures that can be created, manipulated and shared by machines and used to interface with humans in a natural and useful manner.
You might be more impressed now. You might even be thinking of some cool ways an ontology could jazz up your favorite Web 2.0 community with a generous splash of Web 3.0.
So what does one look like?
What passes for an 'ontology' in practice can be as simple as a 'name space' for a controlled vocabulary of properties.
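At the simple end, such a 'name space' can be sketched as nothing more than an agreed prefix plus a fixed set of property names. This is a minimal illustration with a made-up namespace URI and made-up properties, not any real published vocabulary:

```python
# A controlled vocabulary: a namespace prefix plus an agreed set of
# property names. All names and URIs here are invented for illustration.
NS = "http://example.org/wine#"

PROPERTIES = {
    "grape":   NS + "grape",
    "region":  NS + "region",
    "vintage": NS + "vintage",
}

# A 'fact' is then just a (subject, property, value) triple that uses
# those agreed names, so any program sharing the namespace can read it.
fact = ("SomeWine1999", PROPERTIES["vintage"], 1999)
print(fact)
```

The whole 'ontology' here is the agreement on `NS` and the keys of `PROPERTIES`; anything richer (class hierarchies, constraints, inference rules) is layered on top of this kind of shared naming.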
The current approach to creating a semantics for machines based on formal ontologies suffers from three big problems: domain dependence, the need for laborious, time-consuming construction of specific formal ontologies by humans, and the inflexibility of the resulting ontologies.
Domain dependence means that the vocabulary of the ontology only works for some relatively narrow area of knowledge, such as 'wine,' 'nuclear disaster', or 'teledildonics'. Usually, a group of 'experts' in the domain get together and try to agree on a well-defined vocabulary and what the terms mean. The result usually involves subjective compromises and only very roughly approximates the range of expression humans actually use. The problem is that the same terms can mean different things in different specific contexts, so it is difficult for machines to interact across the specific and generally arbitrary domain boundaries that someone came up with in designing the ontologies.
The second problem, the labor-intensive hand-crafting of specific formal vocabularies ('ontologies') by experts, makes machine semantics very slow and expensive to implement and maintain.
The final problem is inflexibility. Once an ontology is designed and in use, it is very hard to modify. Any changes that need to be made require more hard work by experts, and the changes must not break what is already in place. Many important domains change frequently, if not constantly, so inflexibility is a serious problem.
Accepting that ontology is about objects, it is reasonable to try to represent those objects the way the object-oriented programming paradigm does. And when we do, magic occurs.
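The mapping can be sketched like this: ontological categories become classes, class-subclass relations become inheritance, class-member relations become instantiation, and part-whole relations become attributes. All the names below (`Vehicle`, `Car`, `Wheel`) are invented purely for illustration:

```python
class Wheel:
    pass

class Vehicle:
    def __init__(self, wheels):
        # part-whole relation: a vehicle *has* wheels
        self.wheels = wheels

class Car(Vehicle):  # class-subclass relation: a car *is a* vehicle
    def __init__(self):
        super().__init__([Wheel() for _ in range(4)])

my_car = Car()  # class-member relation: my_car is an instance of Car

print(isinstance(my_car, Vehicle))  # True  (inherited category membership)
print(len(my_car.wheels))           # 4     (part-whole structure)
```

The 'magic' is that the relations ontologists have theorized about for centuries fall out of machinery that object-oriented languages already provide for free.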
- What is an Ontology?
- Definitions by leading philosophers
- As construed by information scientists
- Protégé, 'an open-source ontology editor and framework for building intelligent systems'