
Periodic Reporting for period 2 - FOVEDIS (Formal specification and verification of distributed data structures)

Teaser

The future of computing technology relies on fast access, transformation, and exchange of data across large-scale networks such as the Internet. The design of software systems that support high-frequency parallel accesses to large quantities of data is a fundamental challenge...

Summary

The future of computing technology relies on fast access, transformation, and exchange of data across large-scale networks such as the Internet. The design of software systems that support high-frequency parallel accesses to large quantities of data is a fundamental challenge. As more scalable alternatives to traditional relational databases, distributed data structures (DDSs) underpin a wide range of automated services, now and for the foreseeable future.

This proposal aims to improve our understanding of the theoretical foundations of DDSs. The design and usage of DDSs are based on new principles for which we currently lack rigorous engineering methodologies. Specifically, we lack design procedures based on precise specifications, as well as automated reasoning techniques for enhancing the reliability of the engineering process.

The targeted breakthrough of this proposal is to develop automated formal methods for the rigorous engineering of DDSs. A first objective is to define coherent formal specifications that provide precise requirements at design time and explicit guarantees during usage. Then, we will investigate practical programming principles, compatible with these specifications, for building applications that use DDSs. Finally, we will develop efficient automated reasoning techniques for debugging or validating DDS implementations against their specifications. The principles underlying automated reasoning are also important for identifying best practices in the design of these complex systems, increasing confidence in their correctness. The resulting methodologies, based on formal specifications, will thus benefit both the conception and the automated validation of DDS implementations and of the applications that use them.

Work performed

To ensure persistence and availability of data, the state of a distributed data structure (DDS) is replicated at multiple sites in the network, and the effect of concurrent operations is described by various notions of weak consistency. During the reporting period we have investigated one of the most widely adopted notions of weak consistency, causal consistency, which however has different meanings depending on the context and the targeted applications. We have studied three variations of causal consistency occurring in practice, the relationships between them, and the problem of verifying conformance of a DDS implementation with respect to such criteria. Furthermore, we have considered the problem of specifying weakly consistent data structures that support several levels of consistency, e.g., a key-value store providing both eventually consistent and strongly consistent query operations. The study of such specification formalisms is important for identifying the precise guarantees a DDS offers to its clients.
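
To give a flavor of what checking conformance to causal consistency involves, the following minimal sketch tests one necessary condition on a recorded key-value-store execution: no read may miss a causally preceding write to the same key. The event format, the helper names, and the assumption that each value is written at most once per key are illustrative simplifications, not the project's actual tools.

```python
# A minimal sketch of one necessary condition for causal consistency,
# checked on a recorded key-value-store execution.  Illustrative only.
from itertools import product

# An event is (id, session, kind, key, value); kind is 'wr' or 'rd'.
# Ids are assumed to follow program order within each session,
# and value 0 denotes the initial value of a key.
history = [
    (1, "s1", "wr", "x", 1),
    (2, "s1", "wr", "y", 1),
    (3, "s2", "rd", "y", 1),   # reads s1's write to y ...
    (4, "s2", "rd", "x", 0),   # ... but misses the earlier write to x
]

def causal_order(history):
    """Transitive closure of session order plus read-from."""
    co = set()
    # Session order: program order within each session.
    for a, b in product(history, history):
        if a[1] == b[1] and a[0] < b[0]:
            co.add((a[0], b[0]))
    # Read-from: each read of value v on key k is preceded by the
    # (assumed unique) write of v to k.
    for r in history:
        if r[2] == "rd":
            for w in history:
                if w[2] == "wr" and w[3] == r[3] and w[4] == r[4]:
                    co.add((w[0], r[0]))
    # Transitive closure by iteration until fixpoint (fine for tests).
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(co), list(co)):
            if b == c and (a, d) not in co:
                co.add((a, d))
                changed = True
    return co

co = causal_order(history)
# One "bad pattern": a read returning the initial value although a
# write to the same key is causally before it.
for r in history:
    if r[2] == "rd" and r[4] == 0:
        for w in history:
            if w[2] == "wr" and w[3] == r[3] and (w[0], r[0]) in co:
                print("causal consistency violation at read", r[0])
```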

Besides the formalization issues, much of our effort focused on defining new methodologies for verifying whether a given DDS adheres to a given consistency criterion. We have provided efficient testing algorithms that sidestep the theoretical exponential complexity of the problem. More precisely, we have provided practically efficient algorithms for checking a large class of weak consistency criteria, and shown that, under natural assumptions, checking causal consistency of a given execution takes only polynomial time. Furthermore, we have defined a large class of Abstract Data Types (ADTs) for which checking linearizability of an execution (for any implementation of an ADT in that class) can also be done in polynomial time.

While testing is inherently incomplete, we have also studied various reductions of the problem of checking that all the executions of an implementation are correct (with respect to some weak consistency criterion) to classic verification problems such as assertion checking. We have focused on widely used consistency criteria, such as causal consistency and linearizability, and on standard ADTs such as queues, stacks, and key-value maps. To complement these reductions, we have provided new algorithmic techniques for verifying assertions in message-passing programs (the standard paradigm used by DDS implementations), across different versions of the same program, and in heap-manipulating programs (modeling the behavior of a single replica).
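
As an illustration of such pattern-based, polynomial-time checks, the sketch below flags one FIFO-violation pattern in a recorded queue history: a value enqueued strictly earlier (in real time) is dequeued strictly later. A complete checker would cover further patterns (e.g., dequeues returning empty); the operation format and the assumption of distinct enqueued values are illustrative simplifications.

```python
# A minimal sketch of a pattern-based (quadratic-time) check for one
# FIFO-queue linearizability violation on a recorded history.
# Distinct values are assumed, a standard assumption that makes such
# checks tractable; all names are illustrative.
from dataclasses import dataclass

@dataclass
class Op:
    kind: str      # 'enq' or 'deq'
    value: int
    start: float   # real-time invocation stamp
    end: float     # real-time return stamp

def precedes(a: Op, b: Op) -> bool:
    """a happens before b in real time (a returns before b is called)."""
    return a.end < b.start

def fifo_reordering(history):
    """Report pairs where v1 was enqueued strictly before v2,
    yet v2 was dequeued strictly before v1 (a FIFO violation)."""
    enq = {o.value: o for o in history if o.kind == "enq"}
    deq = {o.value: o for o in history if o.kind == "deq"}
    bad = []
    for v1 in enq:
        for v2 in enq:
            if v1 != v2 and v1 in deq and v2 in deq:
                if precedes(enq[v1], enq[v2]) and precedes(deq[v2], deq[v1]):
                    bad.append((v1, v2))
    return bad

history = [
    Op("enq", 1, 0.0, 1.0),
    Op("enq", 2, 2.0, 3.0),   # enq(1) finished before enq(2) started
    Op("deq", 2, 4.0, 5.0),   # ... yet 2 comes out first: not FIFO
    Op("deq", 1, 6.0, 7.0),
]
print(fifo_reordering(history))   # [(1, 2)]
```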

Programming over weakly consistent parallel infrastructures such as DDSs is a difficult problem: their inherently nondeterministic semantics is the root of many programming errors. A formal programming abstraction that is simple, yet exposes both the potential benefits and the dangers of the underlying infrastructure, would go a long way toward simplifying the job of programmers. We have investigated several such programming abstractions for particular infrastructures, such as asynchronous programs and the TSO memory model, which strive to make infrastructure details, such as the number of processes or the communication delays, transparent to programmers.
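
The classic store-buffering litmus test illustrates the kind of nondeterminism such abstractions must tame. The toy model below (not the project's formalism) enumerates the interleavings of two threads whose writes may linger in per-thread store buffers, as under TSO, and shows that both threads can read the initial value 0, an outcome impossible under sequential consistency.

```python
# A toy illustration of the store-buffering litmus test under a
# TSO-like semantics.  All modeling choices are simplifications.
from itertools import permutations

# Thread 1: x = 1; r1 = y        Thread 2: y = 1; r2 = x
# A buffered write becomes globally visible only at an explicit
# 'flush' event; we enumerate interleavings of all six events.
EVENTS = ["wx", "ry", "fx",      # thread 1: buffer write, read, flush
          "wy", "rx", "fy"]      # thread 2

def run(order):
    mem = {"x": 0, "y": 0}
    buf = {}                     # pending buffered writes
    regs = {}
    for e in order:
        if e == "wx": buf["x"] = 1
        if e == "wy": buf["y"] = 1
        if e == "fx": mem["x"] = buf.get("x", mem["x"])
        if e == "fy": mem["y"] = buf.get("y", mem["y"])
        # Reads see main memory (neither thread reads its own flag,
        # so store-buffer forwarding plays no role here).
        if e == "ry": regs["r1"] = mem["y"]
        if e == "rx": regs["r2"] = mem["x"]
    return regs["r1"], regs["r2"]

outcomes = set()
for order in permutations(EVENTS):
    # Respect program order within each thread; the flush may be
    # delayed past the thread's own read, as in a store buffer.
    t1 = [order.index(e) for e in ("wx", "ry")]
    t2 = [order.index(e) for e in ("wy", "rx")]
    if t1 == sorted(t1) and t2 == sorted(t2) \
       and order.index("wx") < order.index("fx") \
       and order.index("wy") < order.index("fy"):
        outcomes.add(run(order))
print(sorted(outcomes))   # includes (0, 0), the relaxed outcome
```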

Final results

For future work, we plan to develop new specification and verification methodologies for distributed data structures (DDSs), the overall objectives being to make specification formalisms simpler to reason about in the context of applications built on top of DDSs, and to enrich the class of DDS implementations amenable to formal verification.

Concerning the first aspect, we plan to focus on three directions: specifying conflict resolution policies, which define the outcome of a set of concurrent updates (e.g., the commutative policies used in CRDTs); deriving reference implementations of consistency criteria that programmers can use, while developing applications, as substitutes for the intricate production implementations; and specifying DDSs that implement transactions. Transactions simplify concurrent programming by enabling computations on shared data that are isolated from other concurrent computations and resilient to failures, and they have become prevalent in modern DDS implementations. Our overarching principle is to develop specification formalisms that lead to efficient verification algorithms and, at the same time, are simple to program against.
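
The grow-only counter below is a minimal sketch of the kind of commutative conflict resolution used in CRDTs (the class and method names are illustrative). Because its merge is commutative, associative, and idempotent, replicas converge to the same value regardless of the order in which they exchange states; these are exactly the algebraic properties a specification formalism should make explicit.

```python
# A minimal sketch of a state-based CRDT: a grow-only counter whose
# merge is a commutative conflict-resolution policy.  Illustrative.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.id = replica_id
        self.counts = [0] * n_replicas   # one slot per replica

    def increment(self):
        self.counts[self.id] += 1        # only touch our own slot

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        """Pointwise max: commutative, associative, idempotent."""
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

# Two replicas update concurrently, then synchronize in either order.
a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); a.increment()
b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3       # convergence
```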

Based on these formalisms, we plan to investigate further the theoretical limits of automated verification for different classes of implementations and weak consistency criteria, and to define new (approximate) algorithms for debugging or verifying DDSs. Since the problem of verifying DDSs is undecidable in general, this requires identifying natural restrictions on DDS implementations (corresponding to classes of implementations occurring in practice) and approximation criteria acceptable in practice.

Furthermore, in the context of programming applications on top of DDSs, one major difficulty is dealing with scenarios in which the replicas are inconsistent. Since the replicas are expected to be consistent most of the time, the usual approach is to speculate and assume that invocations of the data structure are strongly consistent. When this assumption turns out to be violated, the application has to compensate for any incorrect actions taken in the meantime and restore the expected guarantees. Since potential errors usually manifest only when the application runs at the largest scale, verification techniques that validate the placement and the content of compensation code are strongly needed. We plan to investigate well-defined principles, providing strong formal guarantees, for automated reasoning about violations caused by weak consistency and about the corresponding compensation code. At present, such principles are missing, which makes rigorous development of distributed software a laborious and error-prone task.
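
The sketch below illustrates this speculation-and-compensation pattern on a toy replicated bank account (all names and the compensation policy are hypothetical): each replica speculatively approves withdrawals against its local view, and a compensation runs at synchronization time if the combined effect violated the non-negative-balance invariant. Validating the placement and content of such compensation code is precisely the kind of reasoning we aim to automate.

```python
# An illustrative sketch of speculation and compensation over weakly
# consistent replicas.  The compensation policy is deliberately
# simplistic; a real application would, e.g., refuse or roll back
# specific withdrawals.

class AccountReplica:
    def __init__(self, balance):
        self.balance = balance
        self.pending = []            # withdrawals not yet synchronized

    def withdraw(self, amount):
        # Speculate: assume our local view is (strongly) consistent.
        if amount <= self.balance:
            self.balance -= amount
            self.pending.append(amount)
            return True
        return False

def synchronize(replicas, initial_balance):
    total = sum(sum(r.pending) for r in replicas)
    final = initial_balance - total
    if final < 0:
        # The speculation was wrong: the concurrent withdrawals
        # jointly overdrew the account, so compensate.
        print(f"compensating: overdraft of {-final}")
        final = 0                    # simplified compensation
    for r in replicas:
        r.balance, r.pending = final, []

# Both replicas see 100 and each speculatively approves withdrawing 80.
r1, r2 = AccountReplica(100), AccountReplica(100)
r1.withdraw(80); r2.withdraw(80)
synchronize([r1, r2], 100)           # triggers the compensation path
```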