
Periodic Reporting for period 3 - FLEXILOG (Formal lexically informed logics for searching the web)

Teaser

\"The long-term aim of this research is to develop systems that can provide direct answers to questions from users, by reasoning about information that is found on the web. The specific technical challenges addressed in this project relate to how information is represented, and...

Summary

\"The long-term aim of this research is to develop systems that can provide direct answers to questions from users, by reasoning about information that is found on the web. The specific technical challenges addressed in this project relate to how information is represented, and how inferences can be made in a way that is sufficiently robust to deal with the messy nature of data on the web.

Traditionally, in the field of artificial intelligence, logics have been used to represent and reason about information. An important advantage of using logic is that the underlying reasoning processes are completely transparent. Moreover, logical representations naturally allow us to combine information coming from a variety of sources, including structured information (e.g. ontologies and knowledge graphs), information provided by domain experts or obtained through crowdsourcing, or even information expressed in natural language. However, logical inference is also very brittle. Two limitations are particularly problematic in the context of web data: (i) most logics have no mechanism for handling inconsistency, and (ii) there is no mechanism for deriving plausible conclusions in cases where "hard evidence" is missing.

Vector space models form a popular alternative to logic-based representations. The main idea is to represent objects, categories, and the relations between them as geometric objects (e.g. points, vectors, regions) in a high-dimensional Euclidean space. Such models have proven surprisingly effective for many tasks in fields such as information retrieval, natural language processing, and machine learning. However, the underlying inference processes lack transparency, and the conclusions that are derived come without guarantees. This is problematic in many applications, as it is often important that we can provide an intuitive justification to the end user about why a given statement is believed. Such justifications are moreover invaluable for debugging or assessing the performance of a system. The black-box nature of vector space representations also makes it difficult to integrate them with other sources of information.

The aim of this project is to combine the best of both worlds. Specifically, the aim is to derive interpretable semantic structures from vector space models, and to use these semantic structures to develop robust forms of logic-based inference.
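As a purely illustrative Python sketch of this idea (the categories, regions and coordinates below are invented, and the actual project models are more sophisticated), categories can be modelled as regions and entities as points, so that logical statements acquire a direct geometric counterpart:

import numpy as np

# Purely illustrative: categories as axis-aligned boxes (regions) in a
# 2-dimensional "conceptual space", entities as points. Logical statements
# then have a direct geometric counterpart:
#   "x is an A"      <->  point(x) lies inside region(A)
#   "every A is a B" <->  region(A) is contained in region(B)

class Box:
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def contains_point(self, point):
        point = np.asarray(point, dtype=float)
        return bool(np.all(self.lower <= point) and np.all(point <= self.upper))

    def contains_box(self, other):
        return bool(np.all(self.lower <= other.lower) and np.all(other.upper <= self.upper))

# Toy regions; all coordinates are invented for the sake of the example.
animal = Box([0.0, 0.0], [10.0, 10.0])
bird = Box([2.0, 3.0], [6.0, 8.0])
sparrow = np.array([3.0, 5.0])

print(bird.contains_point(sparrow))   # True: "the sparrow is a bird"
print(animal.contains_box(bird))      # True: "every bird is an animal"

In this picture, class membership corresponds to a point lying inside a region, and subsumption ("every bird is an animal") corresponds to one region being contained in another.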

Work performed

The first main research line of the project concerns learning suitable vector space models (also known as embeddings) from data. While there is an abundance of existing methods for learning such models, the representations they produce are typically not interpretable. One important consequence is that existing models are difficult to use in unsupervised settings (e.g. interpreting query terms in an information retrieval context), and that it is not always obvious how external background knowledge can best be incorporated. To address these issues, we have developed a number of new methods for which there is a more direct correspondence between the geometric structure of the vector space model and the logical representation of the same domain. We have also developed two models for learning vector space embeddings that take advantage of prior probabilities, making the resulting representations more robust, especially for entities about which relatively little information is available. In addition, we have explored the possibility of learning higher-quality representations by combining information from multiple languages, and another line of work has looked at qualitative vector space representations.
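The following toy Python sketch conveys the general idea of using prior information for rarely mentioned entities; it is not the project's actual model, and the function name and prior-strength parameter are purely hypothetical. The estimated vector is pulled towards a prior mean, and the pull is stronger when few observations are available.

import numpy as np

def shrunk_entity_vector(context_vectors, prior_mean, prior_strength=10.0):
    # Toy MAP-style estimate of an entity embedding: the vector is pulled
    # towards a prior mean, and the pull is stronger when few observations
    # are available. Name and prior_strength are hypothetical.
    context_vectors = np.atleast_2d(context_vectors)
    n = context_vectors.shape[0]
    observed_mean = context_vectors.mean(axis=0)
    weight = n / (n + prior_strength)
    return weight * observed_mean + (1.0 - weight) * prior_mean

rng = np.random.default_rng(0)
prior = np.zeros(5)                                             # e.g. a centroid of all entities
print(shrunk_entity_vector(rng.normal(size=(2, 5)), prior))     # stays near the prior
print(shrunk_entity_vector(rng.normal(size=(200, 5)), prior))   # follows the data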

In the second main research line we have exploited the learned vector space models for implementing different forms of commonsense reasoning. Initially, we focused on methods for identifying plausible missing facts in existing knowledge bases. We have also looked at inductive reasoning about relations. Building further on this work, we have studied the automated completion of rule bases. Finally, we have studied the problem of integrating ontologies with vector space embeddings from a theoretical point of view; for instance, we analysed the computational complexity of reasoning in description logics extended with an interpolation mechanism.
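As a simplified stand-in for such completion methods (the actual models are considerably more involved), a missing fact can be judged plausible when most of an entity's nearest neighbours in the embedding space are known to satisfy it; the Python sketch below uses invented data throughout.

import numpy as np

def plausible_fact(entity_vec, entity_vectors, has_property, k=5):
    # A missing fact is considered plausible when most of the k entities
    # closest to entity_vec in the embedding space are known to have the
    # property in question.
    distances = np.linalg.norm(entity_vectors - entity_vec, axis=1)
    neighbours = np.argsort(distances)[:k]
    support = float(np.mean(has_property[neighbours]))
    return support >= 0.5, support

rng = np.random.default_rng(1)
vectors = rng.normal(size=(100, 8))            # embeddings of known entities
labels = rng.random(100) < 0.3                 # which entities have the property
query = vectors[0] + 0.01 * rng.normal(size=8) # entity with a possibly missing fact
print(plausible_fact(query, vectors, labels))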

The final main research line relates to the use of vector space representations in applications where prior logical knowledge may be missing. In particular, our aim is to combine commonsense reasoning based on vector representations, on the one hand, with methods for relational learning, on the other, and to evaluate their potential in applications such as natural language processing. One important focus in this research line has been on learning vector representations of relations in an unsupervised way, using co-occurrence statistics from a text corpus such as Wikipedia as input. Moreover, we have developed strategies for highly interpretable approaches to knowledge base completion based on learned rules. This has been achieved by relying on possibilistic logic, which makes it possible to reason about uncertain knowledge in a way that stays close to classical logic.
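A minimal, hypothetical Python sketch of the unsupervised relation-vector idea is shown below: it simply averages the vectors of words that co-occur with both entities in a sentence, whereas the actual method relies on corpus-wide co-occurrence statistics; the corpus and word vectors are invented.

import numpy as np

def relation_vector(head, tail, sentences, word_vectors):
    # Average the vectors of the words that co-occur with both entities
    # in the same sentence.
    context = []
    for sentence in sentences:
        tokens = sentence.lower().split()
        if head in tokens and tail in tokens:
            context += [w for w in tokens if w not in (head, tail) and w in word_vectors]
    if not context:
        return None
    return np.mean([word_vectors[w] for w in context], axis=0)

rng = np.random.default_rng(2)
vocab = ["is", "the", "capital", "of", "located", "in"]
vectors = {word: rng.normal(size=4) for word in vocab}
corpus = ["Paris is the capital of France", "Paris is located in France"]
print(relation_vector("paris", "france", corpus, vectors))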

Final results

While the use of geometric representations is a popular strategy in general, the way in which we use them in this project is highly unconventional. Most existing work is aimed at learning vector representations for the purpose of encoding inputs to neural network models, whereas our aim is to use geometric representations as an interpretable source of knowledge. This means that we try to learn spaces in which semantic notions (such as types, categories and contexts) have a direct geometric counterpart. Moreover, existing approaches almost exclusively use vectors to represent entities and concepts. In contrast, we use vectors for objects, regions for properties and categories, and subspaces for types and contexts. This leads to a much more natural representation, which is easier to interpret and to link to human models of categorisation.
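To illustrate the use of subspaces for types (with invented data and parameters, not the project's actual procedure), a type can be associated with a low-dimensional subspace fitted to the vectors of its known instances; how well a new vector is reconstructed after projecting onto that subspace then indicates how typical it is of the type.

import numpy as np

def type_subspace(instance_vectors, dim=2):
    # Fit a low-dimensional subspace to the vectors of a type's known
    # instances, using the top right-singular vectors of the centred data.
    mean = instance_vectors.mean(axis=0)
    _, _, vt = np.linalg.svd(instance_vectors - mean, full_matrices=False)
    return vt[:dim], mean

def residual(vec, basis, mean):
    # Smaller residual = the vector is better explained by the type's subspace.
    centred = vec - mean
    projection = basis.T @ (basis @ centred)
    return float(np.linalg.norm(centred - projection))

rng = np.random.default_rng(3)
# Toy data: instances of one type vary along only 2 of the 10 dimensions.
instances = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 10))
basis, mean = type_subspace(instances, dim=2)
print(residual(instances[0], basis, mean))         # small residual: typical instance
print(residual(rng.normal(size=10), basis, mean))  # larger residual: atypical vector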

We have also developed methods that use Bayesian inference over vector space representations, which means that predictions are made in a fully transparent yet principled way. Taking inspiration from cognitive models of categorisation, our models essentially implement a form of commonsense reasoning.
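A minimal Python sketch of this kind of Bayesian categorisation is given below, assuming each category is modelled as a Gaussian region in the embedding space; the categories, means and priors are invented for illustration.

import numpy as np
from scipy.stats import multivariate_normal

def categorise(vec, category_models, priors):
    # Posterior over categories: prior probability of the category times the
    # likelihood of the vector under that category's Gaussian region.
    scores = {c: priors[c] * multivariate_normal.pdf(vec, mean=m, cov=cov)
              for c, (m, cov) in category_models.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Invented categories, means and priors, purely for illustration.
models = {
    "bird":   (np.array([0.0, 0.0]), np.eye(2)),
    "mammal": (np.array([3.0, 3.0]), np.eye(2)),
}
priors = {"bird": 0.3, "mammal": 0.7}
print(categorise(np.array([0.5, 0.2]), models, priors))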

Another important contribution concerns learning from relational data. Existing approaches assume that the relation between two entities can be predicted from the vector representations of those entities; we have shown, however, that substantially better results are possible by directly learning vectors that capture such relationships. Finally, we have focused on statistical relational learning with interpretable rule-based models. This is a radical departure from existing methods, as our models are simply stratified classical theories, which are particularly easy to reason with. In contrast to earlier approaches, our approach is more interpretable, more efficient, and often more accurate.
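The following toy Python example illustrates reasoning with such stratified theories in the style of possibilistic logic: rules are grouped into strata by certainty weight, and the least certain strata are discarded until the remaining classical theory is consistent. The rules and weights are invented, and this is a simplified illustration rather than the project's exact algorithm.

from itertools import product

def consistent(clauses, atoms):
    # Brute-force propositional satisfiability check (fine for toy examples).
    # A clause is a list of literals; a literal is a pair (atom, polarity).
    for values in product([False, True], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(any(model[a] == pol for a, pol in clause) for clause in clauses):
            return True
    return False

def possibilistic_closure(weighted_clauses, atoms):
    # Keep the most certain strata first and stop at the first stratum whose
    # addition makes the theory inconsistent; that stratum and everything
    # less certain is discarded (the "drowning" behaviour of possibilistic logic).
    kept = []
    for w in sorted({w for w, _ in weighted_clauses}, reverse=True):
        candidate = kept + [c for cw, c in weighted_clauses if cw == w]
        if not consistent(candidate, atoms):
            break
        kept = candidate
    return kept

atoms = ["bird", "penguin", "flies"]
rules = [
    (1.0, [("penguin", True)]),                    # fact: it is a penguin
    (1.0, [("penguin", False), ("bird", True)]),   # penguins are birds
    (0.9, [("penguin", False), ("flies", False)]), # penguins do not fly
    (0.5, [("bird", False), ("flies", True)]),     # birds typically fly
]
print(possibilistic_closure(rules, atoms))  # the weight-0.5 default is dropped

Because the retained strata form an ordinary classical theory, any standard reasoner can be used on the result, which is what makes this style of model easy to reason with.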

Website & more info

More info: http://www.cs.cf.ac.uk/flexilog/.