This book constitutes the thoroughly refereed proceedings of the 11th International Conference on Web and Internet Economics, WINE 2015, held in Amsterdam, The Netherlands, in December 2015. The 30 regular papers presented together with 8 abstracts were carefully reviewed and selected from 142 submissions and cover results on incentives and computation in theoretical computer science, artificial intelligence, and microeconomics.
This book provides an overview of the confluence of ideas in Turing's era and work, and examines the impact of his work on mathematical logic and theoretical computer science. It combines contributions by well-known scientists on the history and philosophy of computability theory as well as on generalised Turing computability. By looking at the roots and at the philosophical and technical influence of Turing's work, it is possible to gather new perspectives and new research topics which might be considered as a continuation of Turing's ideas well into the 21st century. The chapter "The Stored-Program Universal Computer: Did Zuse Anticipate Turing and von Neumann?" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This pioneering book presents new models for the thermomechanical behavior of composite materials and structures, taking into account internal physico-chemical transformations such as thermodecomposition, sublimation and melting at high temperatures (up to 3000 K). It is of great importance for the design of new thermostable materials and for the investigation of reliability and fire safety of composite structures. It also supports the investigation of the interaction of composites with laser irradiation and the design of heat-shield systems. Structural methods are presented for calculating the effective mechanical and thermal properties of matrices, fibres, and of unidirectional, dispersed-particle-reinforced and textile composites, in terms of the properties of their constituent phases. Useful calculation methods are developed for characteristics such as the rate of thermomechanical erosion of composites under high-speed flow and the heat deformation of composites, taking account of chemical shrinkage. The author extensively compares modeling results with experimental data, and readers will find unique experimental results on the mechanical and thermal properties of composites at temperatures up to 3000 K. Chapters show how the behavior of composite shells under high temperatures is simulated by the finite-element method, and cylindrical and axisymmetric composite shells and composite plates are investigated under local high-temperature heating. The book will be of interest to researchers and to engineers designing composite structures, and invaluable to materials scientists developing advanced performance thermostable materials.
This book discusses state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. Although the resulting algorithms, known as particle filters, have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. This book is ideal for graduate students, researchers, scientists and engineers interested in Bayesian estimation.
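The bootstrap particle filter referred to above can be sketched in a few lines. The scalar model, noise levels and particle count below are illustrative assumptions of this sketch, not examples taken from the book:

```python
import numpy as np

# Assumed toy model (not from the book): x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t
rng = np.random.default_rng(0)
T, N = 50, 1000              # time steps, number of particles
q, r = 0.3, 0.5              # process and measurement noise std deviations

# simulate a ground-truth trajectory and noisy measurements
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + q * rng.standard_normal()
y = x + r * rng.standard_normal(T)

particles = rng.standard_normal(N)       # initial particle cloud
estimates = np.zeros(T)
for t in range(T):
    # propagate each particle through the state transition (the proposal)
    particles = 0.9 * particles + q * rng.standard_normal(N)
    # weight by the measurement likelihood, then normalize
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = w @ particles         # posterior-mean estimate
    # multinomial resampling combats weight degeneracy
    particles = particles[rng.choice(N, N, p=w)]

rmse = np.sqrt(np.mean((estimates - x) ** 2))
```

Each iteration propagates the cloud, reweights it by the measurement likelihood, and resamples; the weighted mean serves as the state estimate.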
This book reviews the theoretical concepts, leading-edge techniques and practical tools involved in the latest multi-disciplinary approaches addressing the challenges of big data. Illuminating perspectives from both academia and industry are presented by an international selection of experts in big data science. Topics and features: describes the innovative advances in theoretical aspects of big data, predictive analytics and cloud-based architectures; examines the applications and implementations that utilize big data in cloud architectures; surveys the state of the art in architectural approaches to the provision of cloud-based big data analytics functions; identifies potential research directions and technologies to facilitate the realization of emerging business models through big data approaches; provides relevant theoretical frameworks, empirical research findings, and numerous case studies; discusses real-world applications of algorithms and techniques to address the challenges of big datasets.
The cryptosystems based on the Integer Factorization Problem (IFP), the Discrete Logarithm Problem (DLP) and the Elliptic Curve Discrete Logarithm Problem (ECDLP) are essentially the only three types of practical public-key cryptosystems in use. The security of these cryptosystems relies heavily on the presumed intractability of these three problems, as no classical polynomial-time algorithms are known for them so far. However, polynomial-time quantum algorithms for IFP, DLP and ECDLP do exist, provided that a practical quantum computer exists. Quantum Attacks on Public-Key Cryptosystems presents almost all known quantum computing based attacks on public-key cryptosystems, with an emphasis on quantum algorithms for IFP, DLP, and ECDLP. It also discusses some quantum-resistant cryptosystems to replace the IFP, DLP and ECDLP based cryptosystems. This book is intended to be used either as a graduate text in computing, communications and mathematics, or as a basic reference in the field.
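As a classical illustration of the reduction at the heart of Shor's quantum factoring algorithm: once the multiplicative order r of a modulo n is known, a nontrivial factor of n falls out via greatest common divisors. The brute-force order search below stands in for the quantum order-finding step, and the numbers are a toy example, not taken from the book:

```python
from math import gcd

def order(a, n):
    # multiplicative order of a modulo n, found by brute force
    # (this is the step a quantum computer performs efficiently)
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    # classical post-processing used in Shor's algorithm: from an even
    # order r of a mod n, gcd(a^(r/2) +/- 1, n) yields a nontrivial
    # factor (when the usual "lucky a" conditions hold)
    if gcd(a, n) != 1:
        return gcd(a, n)
    r = order(a, n)
    if r % 2 != 0:
        return None                  # odd order: retry with another a
    half = pow(a, r // 2, n)
    if half == n - 1:
        return None                  # trivial square root: retry
    return gcd(half - 1, n)

print(factor_via_order(15, 7))       # → 3, since 7 has order 4 mod 15
```

Here 7^4 = 2401 ≡ 1 (mod 15), so r = 4, and gcd(7^2 - 1, 15) = gcd(48, 15) = 3.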
The chapters of this volume each have their own level of presentation. The topics have been chosen based on the active research interest associated with them. Since the interest in some topics is older than in others, some presentations contain fundamental definitions and basic results, while others relate very little of the elementary theory behind them and aim directly toward an exposition of advanced results. Presentations of the latter sort are in some cases restricted to a short survey of recent results (due to the complexity of the methods and proofs themselves). Hence the variation in level of presentation from chapter to chapter only reflects the conceptual situation itself. One example of this is the collective effort to develop an acceptable theory of computation on the real numbers. The last two decades have seen at least two new definitions of effective operations on the real numbers.
This new book on mathematical logic by Jeremy Avigad gives a thorough introduction to the fundamental results and methods of the subject from the syntactic point of view, emphasizing logic as the study of formal languages and systems and their proper use. Topics include proof theory, model theory, the theory of computability, and axiomatic foundations, with special emphasis given to aspects of mathematical logic that are fundamental to computer science, including deductive systems, constructive logic, the simply typed lambda calculus, and type-theoretic foundations. Clear and engaging, with plentiful examples and exercises, it is an excellent introduction to the subject for graduate students and advanced undergraduates who are interested in logic in mathematics, computer science, and philosophy, and an invaluable reference for any practicing logician's bookshelf.
Dyadic (Walsh) analysis emerged as a new research area in applied mathematics and engineering in the early seventies, within attempts to answer demands from practice related to the spectral analysis of different classes of signals, including audio, video, sonar, and radar signals. In the meantime, it has evolved into a mature mathematical discipline with fundamental results and important features that provide a basis for various applications. The book provides the fundamentals of the area by reprinting carefully selected earlier publications, followed by overviews of recent results on particular subjects in the area, written by experts, most of them founders of the field, and some of their followers. In this way, this first volume of the two-volume book offers a rather complete coverage of the development of dyadic Walsh analysis, and provides a deep insight into its mathematical foundations, which is necessary for the consideration of the generalizations and applications that are the subject of the second volume. The theory presented is quite sufficient to serve as a basis for further research in the subject area, as well as to be applied in solving new problems or improving existing solutions for tasks in the areas that motivated the development of dyadic analysis.
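The computational workhorse of dyadic analysis is the Walsh-Hadamard transform, which admits an FFT-like butterfly algorithm. A minimal sketch (the input signal is an arbitrary illustrative vector; the book presents the theory, not this code):

```python
import numpy as np

def fwht(a):
    # fast Walsh-Hadamard transform in natural (Hadamard) ordering:
    # O(n log n) in-place butterflies; n must be a power of two
    a = np.asarray(a, dtype=float).copy()
    n, h = len(a), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # one butterfly: sum and difference of a pair
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
X = fwht(x)      # Walsh (sequency-domain) spectrum of x
```

Applying the transform twice returns the input scaled by its length, mirroring the self-inverse (up to normalization) character of the Walsh system.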
The second volume of the two-volume book is dedicated to various extensions and generalizations of dyadic (Walsh) analysis and related applications. Considered are dyadic derivatives on Vilenkin groups and on various other Abelian and finite non-Abelian groups. Since some important results were developed in the former Soviet Union and in China, overviews of the work done in these countries are provided, together with translations of three papers initially published in Chinese. The presentation continues with chapters written by experts in the area discussing applications of these results to specific tasks in signal processing and system theory. Efficient computation of the related differential operators on contemporary hardware, including graphics processing units, is also considered, which makes the methods and techniques of dyadic analysis and its generalizations computationally feasible. Volume 2 ends with a chapter presenting open problems pointed out by several experts in the area.
This book constitutes the refereed proceedings of the 18th European Conference on Genetic Programming, EuroGP 2015, held in Copenhagen, Denmark, in April 2015, co-located with the Evo* 2015 events EvoCOP, EvoMUSART and EvoApplications. The 12 revised full papers presented together with 6 poster papers were carefully reviewed and selected from 36 submissions. The wide range of topics in this volume reflects the current state of research in the field. Thus, we see topics as diverse as semantic methods, recursive programs, grammatical methods, coevolution, Cartesian GP, feature selection, initialisation procedures, ensemble methods and search objectives; and applications including text processing, cryptography, numerical modelling, software parallelisation, creation and optimisation of circuits, multi-class classification, scheduling and artificial intelligence.
Scientific Computing for Scientists and Engineers is designed to teach undergraduate students relevant numerical methods and required fundamentals in scientific computing. Most problems in science and engineering require the solution of mathematical problems, most of which can only be done on a computer. Accurately approximating those problems requires solving differential equations and linear systems with millions of unknowns, and smart algorithms can be used on computers to reduce calculation times from years to minutes or even seconds. This book explains: How can we approximate these important mathematical processes? How accurate are our approximations? How efficient are our approximations? Scientific Computing for Scientists and Engineers covers: An introduction to a wide range of numerical methods for linear systems, eigenvalue problems, differential equations, numerical integration, and nonlinear problems; Scientific computing fundamentals like floating point representation of numbers and convergence; Analysis of accuracy and efficiency; Simple programming examples in MATLAB to illustrate the algorithms and to solve real life problems; Exercises to reinforce all topics.
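As a taste of the accuracy analysis the blurb describes, one can measure the observed order of convergence of two derivative approximations. This sketch uses Python rather than the book's MATLAB, and the test function is an assumption of mine:

```python
import numpy as np

# Estimate the observed order of accuracy of two finite-difference
# approximations to f'(1) for f = sin, whose exact derivative is cos(1).
f, df = np.sin, np.cos(1.0)
hs = np.array([1e-1, 1e-2])              # two step sizes, ratio 10

fwd = np.abs((f(1.0 + hs) - f(1.0)) / hs - df)             # forward: O(h)
cen = np.abs((f(1.0 + hs) - f(1.0 - hs)) / (2 * hs) - df)  # central: O(h^2)

# error ~ C h^p, so the observed order is p ~ log(e1/e2) / log(h1/h2)
p_fwd = np.log(fwd[0] / fwd[1]) / np.log(hs[0] / hs[1])
p_cen = np.log(cen[0] / cen[1]) / np.log(hs[0] / hs[1])
```

Shrinking h by 10 shrinks the forward-difference error by about 10 and the central-difference error by about 100, so the observed orders come out close to 1 and 2 respectively.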
This book introduces the quantum mechanical framework to information retrieval scientists seeking a new perspective on foundational problems. As such, it concentrates on the main notions of the quantum mechanical framework and describes an innovative range of concepts and tools for modeling information representation and retrieval processes. The book is divided into four chapters. Chapter 1 illustrates the main modeling concepts for information retrieval (including Boolean logic, vector spaces, probabilistic models, and machine-learning based approaches), which will be examined further in subsequent chapters. Next, chapter 2 briefly explains the main concepts of the quantum mechanical framework, focusing on approaches linked to information retrieval such as interference, superposition and entanglement. Chapter 3 then reviews the research conducted at the intersection between information retrieval and the quantum mechanical framework. The chapter is subdivided into a number of topics, and each description ends with a section suggesting the most important reference resources. Lastly, chapter 4 offers suggestions for future research, briefly outlining the most essential and promising research directions to fully leverage the quantum mechanical framework for effective and efficient information retrieval systems. This book is especially intended for researchers working in information retrieval, database systems and machine learning who want to acquire a clear picture of the potential offered by the quantum mechanical framework in their own research area. Above all, the book offers clear guidance on whether, why and when to effectively use the mathematical formalism and the concepts of the quantum mechanical framework to address various foundational issues in information retrieval.
This book constitutes the thoroughly refereed conference proceedings of the 9th International Workshop on Algorithms and Computation, WALCOM 2015, held in Dhaka, Bangladesh, in February 2015. The 26 revised full papers presented together with 3 invited talks were carefully reviewed and selected from 85 submissions. The papers are organized in topical sections on approximation algorithms, data structures and algorithms, computational geometry, combinatorial algorithms, distributed and online algorithms, graph drawing and algorithms, combinatorial problems and complexity, and graph enumeration and algorithms.
This book constitutes the refereed proceedings of the 7th International Symposium on Engineering Secure Software and Systems, ESSoS 2015, held in Milan, Italy, in March 2015. The 11 full papers presented together with 5 short papers were carefully reviewed and selected from 41 submissions. The symposium featured the following topics: formal methods; cloud passwords; machine learning; measurements; ontologies; and access control.
A formal method is not the main engine of a development process; its contribution is to improve system dependability by motivating formalisation where useful. This book summarizes the results of the DEPLOY research project on engineering methods for dependable systems through the industrial deployment of formal methods in software development. The applications considered were in automotive, aerospace, railway, and enterprise information systems, and microprocessor design. The project introduced a formal method, Event-B, into several industrial organisations and built on the lessons learned to provide an ecosystem of better tools, documentation and support to help others to select and introduce rigorous systems engineering methods. The contributing authors report on these projects and the lessons learned. For the academic and research partners and the tool vendors, the project identified improvements required in the methods and supporting tools, while the industrial partners learned about the value of formal methods in general. A particular feature of the book is the frank assessment of the managerial and organisational challenges, the weaknesses in some current methods and supporting tools, and the ways in which they can be successfully overcome. The book will be of value to academic researchers, systems and software engineers developing critical systems, industrial managers, policymakers, and regulators.
This work presents the Clifford-Cauchy-Dirac (CCD) technique for solving problems involving the scattering of electromagnetic radiation from materials of all kinds. It allows anyone who is interested to master techniques that lead to simpler and more efficient solutions to problems of electromagnetic scattering than are currently in use. The technique is formulated in terms of the Cauchy kernel, single integrals, Clifford algebra and a whole-field approach. This is in contrast to many conventional techniques that are formulated in terms of Green's functions, double integrals, vector calculus and the combined field integral equation (CFIE). Whereas these conventional techniques lead to an implementation using the method of moments (MoM), the CCD technique is implemented as alternating projections onto convex sets in a Banach space. The ultimate outcome is an integral formulation that lends itself to a more direct and efficient solution than conventionally is the case, and applies without exception to all types of materials. On any particular machine, it results in either a faster solution for a given problem or the ability to solve problems of greater complexity. The Clifford-Cauchy-Dirac technique offers very real and significant advantages in uniformity, complexity, speed, storage, stability, consistency and accuracy.
This book develops a coherent and quite general theoretical approach to algorithm design for iterative learning control based on the use of operator representations and quadratic optimization concepts, including the related ideas of inverse model control and gradient-based design. Using detailed examples taken from linear, discrete and continuous-time systems, the author gives the reader access to theories based on either signal or parameter optimization. Although the two approaches are shown to be related in a formal mathematical sense, the text presents them separately, as their relevant algorithm design issues are distinct and give rise to different performance capabilities. Together with algorithm design, the text demonstrates the underlying robustness of the paradigm and also includes new control laws that are capable of incorporating input and output constraints, that enable the algorithm to reconfigure systematically to meet the requirements of different reference and auxiliary signals, and that support new properties such as spectral annihilation. Iterative Learning Control will interest academics and graduate students working in control, who will find it a useful reference on the current status of a powerful and increasingly popular method of control. The depth of background theory and links to practical systems will be of use to engineers responsible for precision repetitive processes.
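A gradient-based ILC law of the kind discussed can be sketched on a lifted (matrix) trial model. Everything below, including the plant's impulse response, the reference, and the step-size rule, is an illustrative assumption rather than one of the book's case studies:

```python
import numpy as np

# Lifted trial dynamics y = G u, with G the lower-triangular Toeplitz
# matrix of an assumed plant's impulse response; gradient ILC update:
#   u_{k+1} = u_k + beta * G^T e_k,  e_k = ref - G u_k
N = 20
impulse = 0.8 ** np.arange(N)             # assumed plant impulse response
G = np.array([[impulse[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])

ref = np.sin(np.linspace(0.0, np.pi, N))  # reference tracked on every trial
u = np.zeros(N)
beta = 1.0 / np.linalg.norm(G, 2) ** 2    # step size giving monotone decay

errors = []
for trial in range(200):
    e = ref - G @ u                       # tracking error on this trial
    errors.append(np.linalg.norm(e))
    u = u + beta * G.T @ e                # gradient-descent ILC update
```

Because the update is gradient descent on the quadratic trial cost with a step below the Lipschitz bound, the error norm is non-increasing from trial to trial, which is the monotone-convergence property such designs aim for.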
This unique textbook/reference presents unified coverage of bioinformatics topics relating to both biological sequences and biological networks, providing an in-depth analysis of cutting-edge distributed algorithms, as well as of relevant sequential algorithms. In addition to introducing the latest algorithms in this area, more than fifteen new distributed algorithms are also proposed. Topics and features: reviews a range of open challenges in biological sequences and networks; describes in detail both sequential and parallel/distributed algorithms for each problem; suggests approaches for distributed algorithms as possible extensions to sequential algorithms, when the distributed algorithms for the topic are scarce; proposes a number of new distributed algorithms in each chapter, to serve as potential starting points for further research; concludes each chapter with self-test exercises, a summary of the key points, a comparison of the algorithms described, and a literature review.
This book is a comprehensive, systematic survey of the synthesis problem and of the region theory that underlies its solution, covering the related theory, algorithms, and applications. The authors focus on safe Petri nets and place/transition nets (P/T-nets), treating synthesis as an automated process which, given behavioural specifications or partial specifications of a system to be realized, decides whether the specifications are feasible, and then produces a Petri net realizing them exactly, or, if this is not possible, produces a Petri net realizing an optimal approximation of the specifications. In Part I the authors introduce elementary net synthesis. In Part II they explain variations of elementary net synthesis and the unified theory of net synthesis. The first three chapters of Part III address the linear algebraic structure of regions, synthesis of P/T-nets from finite initialized transition systems, and the synthesis of unbounded P/T-nets. Finally, the last chapter in Part III and the chapters in Part IV cover more advanced topics and applications: P/T-nets with the step firing rule, extracting concurrency from transition systems, process discovery, supervisory control, and the design of speed-independent circuits. Most chapters conclude with exercises, and the book is a valuable reference for graduate students of computer science and electrical engineering, as well as for researchers and engineers in this domain.
Meshfree methods are a modern alternative to classical mesh-based discretization techniques such as finite differences or finite element methods. Especially in a time-dependent setting or in the treatment of problems with strongly singular solutions their independence of a mesh makes these methods highly attractive. This volume collects selected papers presented at the Sixth International Workshop on Meshfree Methods held in Bonn, Germany in October 2011. They address various aspects of this very active research field and cover topics from applied mathematics, physics and engineering.
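To make the mesh independence mentioned above concrete, here is a hedged sketch of radial-basis-function interpolation, one standard meshfree building block, on scattered one-dimensional nodes; the target function, node set, and shape parameter are my assumptions, not taken from the workshop papers:

```python
import numpy as np

# Gaussian RBF interpolation of f(x) = sin(pi x) on scattered nodes:
# no mesh or grid connectivity is needed, only pairwise distances.
rng = np.random.default_rng(1)
nodes = np.concatenate(([0.0, 1.0], rng.uniform(0.0, 1.0, 23)))
f = lambda x: np.sin(np.pi * x)
eps = 4.0                                    # shape parameter (assumed)

def phi(r):
    return np.exp(-(eps * r) ** 2)           # Gaussian radial basis

# interpolation matrix of pairwise basis evaluations
A = phi(np.abs(nodes[:, None] - nodes[None, :]))
# least-squares solve guards against the ill-conditioning typical of
# flat Gaussian kernels on closely spaced nodes
coef, *_ = np.linalg.lstsq(A, f(nodes), rcond=None)

xs = np.linspace(0.0, 1.0, 101)              # arbitrary evaluation points
approx = phi(np.abs(xs[:, None] - nodes[None, :])) @ coef
err = np.max(np.abs(approx - f(xs)))
```

The interpolant is a weighted sum of radially symmetric bumps centred at the nodes, which is why such schemes extend naturally to moving-node, time-dependent settings.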
This volume is the first ever collection devoted to the field of proof-theoretic semantics. Contributions address topics including the systematics of introduction and elimination rules and proofs of normalization, the categorial characterization of deductions, the relation between Heyting's and Gentzen's approaches to meaning, knowability paradoxes, proof-theoretic foundations of set theory, Dummett's justification of logical laws, Kreisel's theory of constructions, paradoxical reasoning, and the defence of model theory. The field of proof-theoretic semantics has existed for almost 50 years, but the term itself was proposed by Schroeder-Heister in the 1980s. Proof-theoretic semantics explains the meaning of linguistic expressions in general, and of logical constants in particular, in terms of the notion of proof. This volume emerges from presentations at the Second International Conference on Proof-Theoretic Semantics in Tübingen in 2013, where contributing authors were asked to provide a self-contained description and analysis of a significant research question in this area. The contributions are representative of the field and should be of interest to logicians, philosophers, and mathematicians alike.
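The introduction and elimination rules mentioned above can be made concrete in a proof assistant. The following Lean snippet (my illustration, not from the volume) proves transitivity of implication using only arrow-introduction (fun-abstraction) and arrow-elimination (application):

```lean
-- →-introduction is fun-abstraction; →-elimination is application.
-- Proof-theoretic semantics studies the harmony between such pairs.
example (p q r : Prop) (hpq : p → q) (hqr : q → r) : p → r :=
  fun hp => hqr (hpq hp)   -- introduce p → r, then eliminate hpq and hqr
```

Under the propositions-as-types reading, this term is itself the deduction, so normalization of proofs corresponds to reduction of the term.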
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are indispensable for the solution of challenging practical problems. The practicability and efficiency of the presented methods are illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied mathematicians, computer scientists and all scientists using mathematical methods.
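The parareal algorithm mentioned above can be sketched in a few lines. The test equation, the propagators, and the slice and iteration counts below are illustrative assumptions, not a case study from the volume:

```python
import numpy as np

# Toy parareal: solve y' = lam*y on [0, T] with a coarse Euler
# propagator G and a finer Euler propagator F, iterating the correction
#   U[n+1] <- G(U[n]_new) + F(U[n]_old) - G(U[n]_old)
lam, T, N = -1.0, 5.0, 10          # decay rate, horizon, time slices
dt = T / N

def G(u, steps=1):
    # coarse propagator: a few explicit Euler steps across one slice
    h = dt / steps
    for _ in range(steps):
        u = u + h * lam * u
    return u

def F(u, steps=100):
    # fine propagator: many Euler steps (stand-in for an accurate solver)
    return G(u, steps)

U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):                 # initial serial coarse sweep
    U[n + 1] = G(U[n])

for k in range(8):                 # parareal iterations
    Fold = np.array([F(U[n]) for n in range(N)])   # parallelizable stage
    Gold = np.array([G(U[n]) for n in range(N)])
    for n in range(N):             # cheap serial correction sweep
        U[n + 1] = G(U[n]) + Fold[n] - Gold[n]
```

The expensive fine solves in each iteration are independent across slices (hence parallelizable), while only the cheap coarse sweep stays serial; after at most N iterations the result coincides with the serial fine solution.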
Scientific Computing with MATLAB (R), Second Edition improves students' ability to tackle mathematical problems. It helps students understand the mathematical background and find reliable and accurate solutions to mathematical problems with the use of MATLAB, avoiding the tedious and complex technical details of mathematics. This edition retains the structure of its predecessor while expanding and updating the content of each chapter. The book bridges the gap between problems and solutions through well-grouped topics and clear MATLAB example scripts and reproducible MATLAB-generated plots. Students can effortlessly experiment with the scripts for a deep, hands-on exploration. Each chapter also includes a set of problems to strengthen understanding of the material.
This book constitutes the thoroughly refereed and revised selected papers from the 10th International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, ALGOSENSORS 2014, held in Wroclaw, Poland, on September 12, 2014. The 10 papers presented in this volume were carefully reviewed and selected from 20 submissions. They are organized in topical sections named: robot planning; algorithms and data structures on graphs; and wireless networks.