
Report


Periodic Reporting for period 1 - LOGIVIS (The logics of information visualisation)

Teaser

At the most general level, the goal of this project is to develop a philosophical framework in which we can explain why visualisation works when it works, and why it fails when it fails. Visualisation, in this context, refers to the use of visual artefacts, like charts, that...

Summary

At the most general level, the goal of this project is to develop a philosophical framework in which we can explain why visualisation works when it works, and why it fails when it fails. Visualisation, in this context, refers to the use of visual artefacts, like charts, that depict data, and that are used to reason about these data (visualisation as a tool for inference) or to argue in favour of a conclusion that is (supposedly) supported by the data that are depicted (visualisation as an argumentative or rhetorical device). In that sense, a visualisation is successful if it allows for reliable and efficient reasoning about the data, or if it successfully and correctly supports a given conclusion. It fails when it doesn’t. In particular, it fails when the reasoning or argumentation it supports is somehow fallacious; that is, if it misrepresents the data because it inadvertently or consciously distorts what can be concluded from these data.

The background against which this project is developed is double: on the one hand there is the role of visualisation within the new epistemic practices that are associated with the data-revolution (data science, analytics, the use of algorithms); on the other, there is the call within the visualisation-sciences (information visualisation, scientific visualisation, and visual analytics) to develop new theoretical frameworks that can inform the practice of visualisation, drive innovation, and lead to better predictions regarding the effectiveness of visualisations. In this context, this project strives to contribute to the critical reflexes and the development of epistemic standards that are needed as new epistemic practices arise, and to narrow the gap between existing theories on visualisation and insights (from logic, epistemology, and the philosophy of science) regarding the epistemic value of visualisations.

Progress within this project was made on two levels:
First, a more precise characterisation of the epistemological problem of visualisation was developed by (a) contrasting the problem of visualisation in the philosophical literature with how it is approached within the visualisation sciences; (b) disambiguating the object-level and meta-level inference problems in visualisation; and (c) analysing this meta-level problem as a design-problem.
Second, a formal analysis of data-transformations was developed in which it is possible to reason about simple and complex data-objects, as well as about the transformations (combine, aggregate, abstract, ...) we rely on to construct and modify such data-objects.
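As a purely illustrative sketch (not the project's actual formalism), the kinds of transformations mentioned above — combining, aggregating, and abstracting data-objects — can be modelled as operations on simple tabular data. All function and field names here are hypothetical.

```python
# Illustrative sketch: combine, aggregate, and abstract as operations on
# simple data-objects (lists of records). These names do not come from the
# project; they only illustrate that such transformations can be treated as
# well-defined objects we can reason about.

from collections import defaultdict

def combine(a, b):
    """Combine two data-objects into one."""
    return a + b

def aggregate(records, key, value):
    """Aggregate a value-field per key, here by summing per group."""
    totals = defaultdict(int)
    for r in records:
        totals[r[key]] += r[value]
    return dict(totals)

def abstract(records, field, mapping):
    """Abstract a field by mapping fine-grained values to coarser categories."""
    return [{**r, field: mapping.get(r[field], r[field])} for r in records]

sales_q1 = [{"city": "Ghent", "amount": 10}, {"city": "Oxford", "amount": 20}]
sales_q2 = [{"city": "Ghent", "amount": 5}]

combined = combine(sales_q1, sales_q2)
coarse = abstract(combined, "city", {"Ghent": "BE", "Oxford": "UK"})
print(aggregate(coarse, "city", "amount"))  # {'BE': 15, 'UK': 20}
```

Note that `abstract` followed by `aggregate` discards the city-level detail: exactly the kind of step whose epistemic effects the formal analysis is meant to track.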

Work performed

The project resulted in 11 seminar, workshop, or conference presentations, 6 completed research-papers (4 published, 2 submitted), 3 research-papers in progress, the organisation of 3 workshops, and the preparation of 1 special issue (still in progress).

Work on "The Problem of Visualisation" and "The Design Problem of Visualisation" contributed the following insights:
(1) Understanding what is at stake epistemologically in visualisation requires us to contrast (a) the philosophical and the technical problem of visualisation (what is it vs how do we make it?), (b) the epistemological and the computational problem of visualisation (how is a visualisation related to its target, i.e. what it depicts, represents, conveys information about, vs how is a visualisation generated and consumed), and (c) the semantic and the syntactical problem (what does it mean or tell us vs how does it encode a data-object).
(2) Understanding the role of inference in visualisation requires us to clearly distinguish the object-level and meta-level inference problems in visualisation.
(3) The meta-level problem of visualisation is an ampliative or non-deductive inference problem that is best understood as a design-problem. That is, a problem whose solution does not require more or better data, but better insight into the object-level problem (the requirements) and more knowledge of the design-space. From a formal point of view, this relates the meta-level problem to so-called characterisation problems (how do we characterise and describe the logical spaces in which we organise different possibilities, and how do we unambiguously single out a selection of those possibilities). From an applied perspective, this establishes connections with two existing lines of research within the visualisation-sciences, namely the development of taxonomies of visualisations and visual actions (our options), and the development of specification-languages (the formal languages we use to unambiguously describe a graphical representation of data-objects).
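One real instance of such specification-languages is Vega-Lite, whose declarative specs unambiguously single out one graphical representation from the design-space. The sketch below shows the shape of such a spec (the data values are invented for illustration).

```python
# A minimal declarative chart specification in the style of Vega-Lite,
# one existing specification-language for graphical representations.
# The data values are invented for illustration.

spec = {
    "data": {"values": [{"year": 2019, "count": 4}, {"year": 2020, "count": 7}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "year", "type": "ordinal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}

# The spec singles out one point in the design-space: a bar chart mapping
# year to the x-axis and count to bar height.
print(spec["mark"], sorted(spec["encoding"]))  # bar ['x', 'y']
```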

Work on "The Logical Analysis of Visualisation-Operations and Data-Transformations" contributed the following insights:
(4) The unification of a philosophical outlook on visualisation that is based on information-flow across networks of abstraction with the technical outlook that approaches the problem of visualisation in terms of coding and de-coding.
(5) The formulation of a qualitative or logical counterpart of recent work done by Min Chen et al. on the use of Shannon’s information-theory in the context of visualisation.
(6) A formal reconstruction of some insights from Bertin’s classic “Semiology of Graphics”.
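The information-theoretic perspective mentioned in (5) can be illustrated with a toy calculation (not Min Chen's actual framework): the Shannon entropy of a dataset before and after an aggregation step, showing that encoding data at a coarser level of abstraction can only lose information.

```python
# Toy illustration (not Min Chen's framework): Shannon entropy before and
# after an aggregation step, quantifying the information a coarser encoding
# discards.

from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of values."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * log2(c / n) for c in counts.values())

raw = list(range(8))            # eight equally likely distinct values
binned = [v // 2 for v in raw]  # coarser abstraction: merge adjacent pairs

print(entropy(raw))     # 3.0 bits
print(entropy(binned))  # 2.0 bits: one bit lost in the aggregation
```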

The unexpected applications of this work concern classification-practices and profiling, and contributed insights on:
(7) Classification, abstraction, and their role in prediction.
(8) The generation of information-asymmetries in profiling.
(9) A new understanding of the epistemic risks in profiling practices.

Final results

Throughout this project, progress beyond the state of the art was achieved at two levels. First, by integrating the scientific and engineering perspectives on visualisation, which are mainly aimed at efficiency and effective design, with the philosophical perspective, which is primarily focused on reliable representation and valid reasoning. This makes it possible to make philosophical progress while remaining in touch with developing practices, and is a precondition for developing philosophical frameworks that can also contribute to the search for better foundational theories of visualisation. Second, by developing a basic formal framework for reasoning about data-manipulations, it became possible to diagnose epistemic risks associated with the processing of data while relying on minimal assumptions about what happens when data are aggregated and generalisations are made specific.

At a more general level, this project sought to move beyond the state of the art in two ways: on the one hand, by integrating critical perspectives on data-practices and representational practices, which are usually developed informally, into a more formal setting focused on the inferential processes within such practices; on the other hand, by applying logical tools for philosophical (rather than technical) purposes in a context where they are rarely used in this way, namely to evaluate epistemic processes in which technology plays a central role.

Website & more info

More info: https://www.oii.ox.ac.uk/research/projects/logivis-the-logics-of-information-visualisation/.