There is no doubt that ontologies are used mainly for knowledge representation.
In the case of knowledge representation, ideally we should express as much knowledge as we have, so that the agent can find and understand anything it needs while performing its task, which, unfortunately, is impossible. The other extreme is to give the agent only the knowledge it might need in the designed scenario. In that case the agent will be helpless when it meets an unexpected situation (yes, statistical methods might make things better, but they don't solve the problem).
I think ontologies struggle in the middle. That is, agents share some basic cognition about the world (the ontology), so that their knowledge (the instance-level ontology) can be shared on the basis of that common cognition. But we can keep arguing about the extent to which the basic cognition (the ontology) should be modeled. Of course it is the same problem we face with knowledge representation, because the ontology itself is knowledge too.
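To make the split concrete, here is a minimal sketch in plain Python. All the names and structures below (BASE_ONTOLOGY, is_a, valid, the sample concepts) are my own illustration, not any standard ontology API: the shared ontology is the small "basic cognition" both agents hold, and the instance-level facts only make sense against it.

```python
# A minimal sketch of the split described above: a small shared
# "basic cognition" (concepts and relations) versus instance-level
# knowledge expressed against it. All names here are hypothetical.

# The shared ontology: a concept hierarchy plus the relations agents agree on.
BASE_ONTOLOGY = {
    "concepts": {"Agent": None, "Person": "Agent", "Task": None},  # child -> parent
    "relations": {"performs": ("Agent", "Task")},                  # name -> (domain, range)
}

def is_a(concept, ancestor):
    """Walk the concept hierarchy of the shared ontology."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = BASE_ONTOLOGY["concepts"].get(concept)
    return False

# Instance-level knowledge: facts that are interpretable only because
# both agents share the base ontology above.
facts = [("alice", "Person"), ("cook_dinner", "Task")]
assertions = [("performs", "alice", "cook_dinner")]

def valid(assertion):
    """A receiving agent validates an assertion against the shared agreement."""
    rel, subj, obj = assertion
    domain, rng = BASE_ONTOLOGY["relations"][rel]
    types = dict(facts)
    return is_a(types[subj], domain) and is_a(types[obj], rng)

print(valid(assertions[0]))  # True: 'alice' is a Person, hence an Agent
```

The point of the sketch is only that the instance-level facts are meaningless on their own; the shared hierarchy is what lets another agent check and use them.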
So we can see that an ontology is "modeled for the future, represented with the present". That is difficult: who can tell what will or will not happen in the future? The current solution is to collect the experience of everyone. We gather a group of people; they argue, they fight, they compromise, and finally they decide: "if we have these concepts and relations, they will cover 80% of future tasks". An ontology is an agreement.
So it may be shabby, because the creator has no patience with so many concepts and relations, or the agent does not need so much knowledge for now, or the "expert" himself in fact knows little about the working domain. (I guess these things often happen in domain ontology construction.) Thus we will feel the pressing need for ontology evolution far more often than the domain itself changes, because the need is driven both by the incompleteness of the ontology and by real changes in the working domain.
Much work related to this problem has been done, such as ontology evolution, ontology integration, and ontology mapping. For these techniques to be widely tested and used, I would argue that we need lightweight iteration: publish quickly, use quickly, review quickly, and enter the next iteration quickly. It is just like the idea behind lightweight software development, or open source. The lightweight iterations should go on until the ontology is stable; after that, changes will occur mainly because the domain itself changes.
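As a toy illustration of what one such iteration might look like (again, every name here is hypothetical, not a real tool), each quick release is a new ontology version, and a simple mapping carries instance data forward so early adopters are not broken by the next round:

```python
# A toy sketch of the lightweight-iteration idea: versioned ontology
# releases plus a small rename mapping between versions, so instance
# data published against version 1 survives into version 2.

from dataclasses import dataclass, field

@dataclass
class OntologyVersion:
    version: int
    concepts: set
    # mapping from the previous version's concept names to this version's
    renames: dict = field(default_factory=dict)

# Iteration 1: quick publish of a shabby first cut.
v1 = OntologyVersion(1, {"Car", "Owner"})

# Iteration 2: quick review found "Owner" too narrow; rename it, add a concept.
v2 = OntologyVersion(2, {"Car", "Person", "Garage"}, renames={"Owner": "Person"})

def migrate(instances, new):
    """Carry (individual, concept) facts into the next iteration."""
    out = []
    for individual, concept in instances:
        concept = new.renames.get(concept, concept)
        if concept in new.concepts:        # keep what still makes sense
            out.append((individual, concept))
    return out

v1_data = [("bob", "Owner"), ("bobs_car", "Car")]
print(migrate(v1_data, v2))  # [('bob', 'Person'), ('bobs_car', 'Car')]
```

Real ontology mapping is of course far harder than a rename table, but the shape of the loop is the point: each cheap release plus a cheap migration path keeps users on board while the agreement converges.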
But I have to admit that lightweight iteration is not an easy thing. Well, it is what I will work on next.
p.s. We know the definition of an ontology is "an explicit specification of a conceptualization". One feature behind this definition is that "an ontology is an agreement among a community"; all the discussion above concerns this feature only.
p.s. This post was written in Microsoft Word using the "Blogger for Word" add-in. I am really looking forward to Google's own IM now.