The tableau methodology, invented in the 1950s by Beth and Hintikka and later perfected by Smullyan and Fitting, is today one of the most popular proof-theoretical methodologies: firstly because it is a very intuitive tool, and secondly because it appears to bring together the proof-theoretical and the semantical approaches to the presentation of a logical system. The increasing
demand for improved tableau methods for various logics is mainly
prompted by extensive applications of logic in computer science,
artificial intelligence and logic programming, as well as its use
as a means of conceptual analysis in mathematics, philosophy,
linguistics and in the social sciences. In the last few years the
renewed interest in the method of analytic tableaux has generated a
plethora of new results, in classical as well as non-classical
logics. On the one hand, recent advances in tableau-based theorem
proving have drawn attention to tableaux as a powerful deduction
method for classical first-order logic, in particular for
non-clausal formulas accommodating equality. On the other hand,
there is a growing need for a diversity of non-classical logics
which can serve various applications, and for algorithmic
presentations of these logics in a unifying framework which can
support (or suggest) a meaningful semantic interpretation. From
this point of view, the methodology of analytic tableaux seems to
be most suitable. Therefore, renewed research activity is being
devoted to investigating tableau systems for intuitionistic, modal,
temporal and many-valued logics, as well as for new families of
logics, such as non-monotonic and substructural logics. The results
require systematisation. This Handbook is the first to provide such
a systematisation of this expanding field. It contains several
chapters on the use of tableau methods in classical logic, but also contains extensive discussions on: the uses of the methodology in intuitionistic logics; modal and temporal logics; substructural logics; nonmonotonic and many-valued logics; the implementation of semantic tableaux; and a bibliography on analytic tableau theorem proving. The result is a solid reference work to be used by
students and researchers in Computer Science, Artificial
Intelligence, Mathematics, Philosophy, Cognitive Sciences, Legal
Studies, Linguistics, Engineering and all the areas, whether
theoretical or applied, in which the algorithmic aspects of logical
deduction play a role.
|
Leveraging Applications of Formal Methods, Verification, and Validation - International Workshops, SARS 2011 and MLSC 2011, held under the auspices of ISoLA 2011 in Vienna, Austria, October 17-18, 2011. Revised Selected Papers (Paperback, 2012 ed.)
Reiner Hahnle, Jens Knoop, Tiziana Margaria, Dietmar Schreiner, Bernhard Steffen
|
R1,464
Discovery Miles 14 640
|
Ships in 10 - 15 working days
|
This volume contains a selection of revised papers that were
presented at the Software Aspects of Robotic Systems, SARS 2011
Workshop and the Machine Learning for System Construction, MLSC
2011 Workshop, held during October 17-18 in Vienna, Austria, under
the auspices of the International Symposium Series on Leveraging
Applications of Formal Methods, Verification, and Validation,
ISoLA. The topics covered by the papers of the SARS and MLSC workshops demonstrate the breadth and richness of the two fields, stretching from robot programming, languages and compilation techniques, real-time and fault tolerance, dependability, software architectures, computer vision, cognitive robotics, multi-robot coordination, and simulation to bio-inspired algorithms, and from machine learning for anomaly detection and model construction in software product lines to the classification of web service interfaces. In addition, the
SARS workshop hosted a special session on the recently launched
KOROS project on collaborating robot systems, which is carried by a consortium of researchers from the faculties of architecture and planning, computer science, electrical engineering and information
technology, and mechanical and industrial engineering at the Vienna
University of Technology. The four papers devoted to this session
highlight important research directions pursued in this
interdisciplinary research project.
Recent years have been blessed with an abundance of logical
systems, arising from a multitude of applications. A logic can be
characterised in many different ways. Traditionally, a logic is
presented via the following three components: 1. an intuitive, non-formal motivation, perhaps tying it to some application area; 2. a semantical interpretation; 3. a proof-theoretical formulation. There are several types of proof-theoretical methodologies: Hilbert style, Gentzen style, goal-directed style, labelled deductive system style, and so on. The tableau methodology, invented in the 1950s by Beth and Hintikka and later perfected by Smullyan and Fitting, is today one of the most popular, since it appears to bring together the proof-theoretical and the semantical approaches to the presentation of a logical system and is also very intuitive. In many universities it is the style first taught to students.
Recently interest in tableaux has become more widespread and a
community crystallised around the subject. An annual tableaux
conference is being held and proceedings are published. The present
volume is a Handbook of Tableaux, presenting to the community a wide coverage of tableau systems for a variety of logics. It is written by active members of the community and brings the reader up to frontline research. It will be of interest to any formal logician from any area.
This volume contains the research papers, invited papers, and abstracts of tutorials presented at the Second International Conference on Tests and Proofs (TAP 2008), held April 9-11, 2008 in Prato, Italy. TAP was the second conference devoted to the convergence of proofs and tests. It combines ideas from both areas for the advancement of software quality. To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. On the surface, the two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of their incompatibility and taking instead the best of what each of these software engineering domains has to offer.
The ultimate goal of program verification is not the theory
behind the tools or the tools themselves, but the application of
the theory and tools in the software engineering process. Our
society relies on the correctness of a vast and growing amount of
software. Improving the software engineering process is an
important, long-term goal with many steps. Two of those steps are
the KeY tool and this KeY book.
|
Theorem Proving with Analytic Tableaux and Related Methods - 4th International Workshop, TABLEAUX-95, Schloss Rheinfels, St. Goar, Germany, May 7 - 10, 1995. Proceedings (Paperback, 1995 ed.)
Peter Baumgartner, Reiner Hahnle, Joachim Posegga
|
R1,608
Discovery Miles 16 080
|
Ships in 10 - 15 working days
|
This volume constitutes the proceedings of the 4th International Workshop on Theorem Proving with Analytic Tableaux and Related Methods, TABLEAUX '95, held at Schloss Rheinfels, St. Goar, Germany, in May 1995.
Originally tableau calculi and their relatives were favored
primarily as a pedagogical device because of their advantages at
the presentation level. The 23 full revised papers in this book
bear witness that these methods have now gained fundamental
importance in theorem proving, particularly as competitors for
resolution methods. The book is organized in sections on
extensions, modal logic, intuitionistic logic, the connection
method and model elimination, non-clausal proof procedures, linear
logic, higher-order logic, and applications"
This book presents a self-contained and unified approach to automated reasoning in multiple-valued logics (MVL) developed by the author. Moreover, it contains a virtually complete account of other approaches to automated reasoning in MVL. This is the first overview of this subfield of automated reasoning ever given. Finally, a variety of applications of automated reasoning in MVL, including several short case studies, is presented. Automated
reasoning in non-classical logics is an essential subtask of many
AI applications. Applications of MVL in particular include, for
instance, hardware and software verification, reasoning with
incomplete or inconsistent knowledge, and natural language
processing. Therefore, efficient theorem proving methods in MVL are
essential. In the historical part of the book it is demonstrated
why existing approaches are inadequate. In the original part a
simple, but powerful, concept called 'sets-as-signs' is introduced
in the context of semantic tableaux, and subsequently is applied to
a variety of calculi including resolution and dissolution. It is
shown that 'sets-as-signs' yields a many-valued extension of the
well-known relationship between classical logic and integer
programming. As a consequence, automated reasoning in
infinitely-valued logics can be done uniformly and efficiently for
the first time.
This book presents reflections on the occasion of 20 years of the KeY project, which focuses on deductive software verification. Since the inception of the KeY project two decades ago, the area of deductive verification has evolved considerably. Support for real-world programming languages by deductive program verification tools has become prevalent. This required overcoming significant theoretical and technical challenges to support advanced software engineering and programming concepts. The community became more
interconnected with a competitive, but friendly and supportive
environment. We took the 20-year anniversary of KeY as an
opportunity to invite researchers, inside and outside of the
project, to contribute to a book capturing some state-of-the-art
developments in the field. We received thirteen contributions from recognized experts in the field addressing the latest challenges. The topics of the contributions range from tool development, efficiency, and usability considerations to novel specification and verification methods. This book should offer the reader an up-to-date impression of the current state of the art in deductive verification and, we hope, inspire her to contribute to the field and to join forces. We look forward to meeting you at the next conference, to listening to your research talks, and to the resulting fruitful discussions and collaborations.
Machine learning of software artefacts is an emerging area of
interaction between the machine learning and software analysis
communities. Increased productivity in software engineering relies
on the creation of new adaptive, scalable tools that can analyse
large and continuously changing software systems. These require new
software analysis techniques based on machine learning, such as
learning-based software testing, invariant generation or code
synthesis. Machine learning is a powerful paradigm that provides
novel approaches to automating the generation of models and other
essential software artefacts. This volume originates from a
Dagstuhl Seminar entitled "Machine Learning for Dynamic Software
Analysis: Potentials and Limits" held in April 2016. The seminar
focused on fostering a spirit of collaboration in order to share
insights and to expand and strengthen the cross-fertilisation
between the machine learning and software analysis communities. The
book provides an overview of the machine learning techniques that
can be used for software analysis and presents example applications
of their use. Besides an introductory chapter, the book is
structured into three parts: testing and learning, extension of
automata learning, and integrative approaches.
|
Formal Methods for Components and Objects - 11th International Symposium, FMCO 2012, Bertinoro, Italy, September 24-28, 2012, Revised Lectures (Paperback, 2013 ed.)
Elena Giachino, Reiner Hahnle, Frank S De Boer, Marcello M. Bonsangue
|
R2,021
Discovery Miles 20 210
|
Ships in 10 - 15 working days
|
This book constitutes revised lectures from the 11th International Symposium on Formal Methods for Components and Objects, FMCO 2012, held in
Bertinoro, Italy, in September 2012. The 8 lectures featured in
this volume are by world-renowned experts within the area of formal
models for objects and components. The book provides a unique
combination of ideas on software engineering and formal methods
which reflect the expanding body of knowledge on modern software
systems.
|
Fundamental Approaches to Software Engineering - 22nd International Conference, FASE 2019, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019, Prague, Czech Republic, April 6-11, 2019, Proceedings (Paperback, 1st ed. 2019)
Reiner Hahnle, Wil Van Der Aalst
|
R1,637
Discovery Miles 16 370
|
Ships in 10 - 15 working days
|
This book is Open Access under a CC BY licence. This book
constitutes the proceedings of the 22nd International Conference on
Fundamental Approaches to Software Engineering, FASE 2019, which
took place in Prague, Czech Republic in April 2019, held as Part of
the European Joint Conferences on Theory and Practice of Software,
ETAPS 2019. The 24 papers presented in this volume were carefully
reviewed and selected from 94 submissions. The papers are organized
in topical sections named: software verification; model-driven
development and model transformation; software evolution and
requirements engineering; specification, design, and implementation
of particular classes of systems; and software testing.
Static analysis of software with deductive methods is a highly
dynamic field of research on the verge of becoming a mainstream
technology in software engineering. It consists of a large
portfolio of - mostly fully automated - analyses: formal
verification, test generation, security analysis, visualization,
and debugging. All of them are realized in the state-of-the-art
deductive verification framework KeY. This book is the definitive
guide to KeY that lets you explore the full potential of deductive
software verification in practice. It contains the complete theory
behind KeY for active researchers who want to understand it in
depth or use it in their own work. But the book also features fully
self-contained chapters on the Java Modeling Language and on Using KeY that require nothing more than familiarity with Java. All other chapters are accessible to graduate students (M.Sc. level and beyond). The KeY framework is free and open software, downloadable from the book's companion website, which also contains all code examples mentioned in this book.